00:00:00.001 Started by upstream project "autotest-per-patch" build number 132421 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.095 The recommended git tool is: git 00:00:00.095 using credential 00000000-0000-0000-0000-000000000002 00:00:00.097 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.131 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.212 > git --version # 'git version 2.39.2' 00:00:00.212 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.407 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.417 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.428 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.428 > git config core.sparsecheckout # timeout=10 00:00:05.440 > git read-tree -mu HEAD # timeout=10 00:00:05.456 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.477 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.477 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.566 [Pipeline] Start of Pipeline 00:00:05.583 [Pipeline] library 00:00:05.585 Loading library shm_lib@master 00:00:05.586 Library shm_lib@master is cached. Copying from home. 00:00:05.655 [Pipeline] node 00:00:05.683 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.684 [Pipeline] { 00:00:05.690 [Pipeline] catchError 00:00:05.691 [Pipeline] { 00:00:05.699 [Pipeline] wrap 00:00:05.705 [Pipeline] { 00:00:05.710 [Pipeline] stage 00:00:05.712 [Pipeline] { (Prologue) 00:00:05.723 [Pipeline] echo 00:00:05.724 Node: VM-host-SM38 00:00:05.728 [Pipeline] cleanWs 00:00:05.779 [WS-CLEANUP] Deleting project workspace... 00:00:05.779 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.798 [WS-CLEANUP] done 00:00:05.967 [Pipeline] setCustomBuildProperty 00:00:06.040 [Pipeline] httpRequest 00:00:09.074 [Pipeline] echo 00:00:09.076 Sorcerer 10.211.164.101 is dead 00:00:09.084 [Pipeline] httpRequest 00:00:11.057 [Pipeline] echo 00:00:11.058 Sorcerer 10.211.164.101 is alive 00:00:11.067 [Pipeline] retry 00:00:11.069 [Pipeline] { 00:00:11.080 [Pipeline] httpRequest 00:00:11.083 HttpMethod: GET 00:00:11.084 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.084 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.095 Response Code: HTTP/1.1 200 OK 00:00:11.095 Success: Status code 200 is in the accepted range: 200,404 00:00:11.096 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.303 [Pipeline] } 00:00:13.320 [Pipeline] // retry 00:00:13.328 [Pipeline] sh 00:00:13.614 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.631 [Pipeline] httpRequest 00:00:16.259 [Pipeline] echo 00:00:16.260 Sorcerer 10.211.164.101 is alive 00:00:16.269 [Pipeline] retry 00:00:16.271 [Pipeline] { 00:00:16.286 [Pipeline] httpRequest 00:00:16.290 HttpMethod: GET 00:00:16.291 URL: http://10.211.164.101/packages/spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:00:16.291 Sending request to url: http://10.211.164.101/packages/spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:00:16.313 Response Code: HTTP/1.1 200 OK 00:00:16.313 Success: Status code 200 is in the accepted range: 200,404 00:00:16.314 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:01:36.772 [Pipeline] } 00:01:36.791 [Pipeline] // retry 00:01:36.799 [Pipeline] sh 00:01:37.079 + tar --no-same-owner -xf spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:01:40.396 [Pipeline] sh 00:01:40.682 + git -C spdk log --oneline -n5 00:01:40.682 ede20dc4e lib/nvmf: Fix double free of connect request 00:01:40.682 bc5264bd5 nvme: Fix discovery loop when target has no entry 00:01:40.682 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:01:40.682 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:01:40.682 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:01:40.702 [Pipeline] writeFile 00:01:40.716 [Pipeline] sh 00:01:41.004 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:41.018 [Pipeline] sh 00:01:41.302 + cat autorun-spdk.conf 00:01:41.302 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.302 SPDK_TEST_NVME=1 00:01:41.302 SPDK_TEST_FTL=1 00:01:41.302 SPDK_TEST_ISAL=1 00:01:41.302 SPDK_RUN_ASAN=1 00:01:41.302 SPDK_RUN_UBSAN=1 00:01:41.302 SPDK_TEST_XNVME=1 00:01:41.302 SPDK_TEST_NVME_FDP=1 00:01:41.302 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.310 RUN_NIGHTLY=0 00:01:41.312 [Pipeline] } 00:01:41.327 [Pipeline] // stage 00:01:41.345 [Pipeline] stage 00:01:41.347 [Pipeline] { (Run VM) 00:01:41.360 [Pipeline] sh 00:01:41.646 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:41.646 + echo 'Start stage prepare_nvme.sh' 00:01:41.646 Start stage prepare_nvme.sh 00:01:41.646 + [[ -n 10 ]] 00:01:41.646 + disk_prefix=ex10 00:01:41.646 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:41.646 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:41.646 + source 
/var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:41.646 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.646 ++ SPDK_TEST_NVME=1 00:01:41.646 ++ SPDK_TEST_FTL=1 00:01:41.646 ++ SPDK_TEST_ISAL=1 00:01:41.646 ++ SPDK_RUN_ASAN=1 00:01:41.646 ++ SPDK_RUN_UBSAN=1 00:01:41.646 ++ SPDK_TEST_XNVME=1 00:01:41.646 ++ SPDK_TEST_NVME_FDP=1 00:01:41.646 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.646 ++ RUN_NIGHTLY=0 00:01:41.646 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:41.646 + nvme_files=() 00:01:41.646 + declare -A nvme_files 00:01:41.646 + backend_dir=/var/lib/libvirt/images/backends 00:01:41.646 + nvme_files['nvme.img']=5G 00:01:41.646 + nvme_files['nvme-cmb.img']=5G 00:01:41.646 + nvme_files['nvme-multi0.img']=4G 00:01:41.646 + nvme_files['nvme-multi1.img']=4G 00:01:41.646 + nvme_files['nvme-multi2.img']=4G 00:01:41.646 + nvme_files['nvme-openstack.img']=8G 00:01:41.646 + nvme_files['nvme-zns.img']=5G 00:01:41.646 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:41.646 + (( SPDK_TEST_FTL == 1 )) 00:01:41.646 + nvme_files["nvme-ftl.img"]=6G 00:01:41.646 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:41.646 + nvme_files["nvme-fdp.img"]=1G 00:01:41.646 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:41.646 + for nvme in "${!nvme_files[@]}" 00:01:41.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:41.908 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.908 + for nvme in "${!nvme_files[@]}" 00:01:41.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:01:42.848 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:42.848 + for nvme in "${!nvme_files[@]}" 00:01:42.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:42.848 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.848 + for nvme in "${!nvme_files[@]}" 00:01:42.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:42.848 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:42.848 + for nvme in "${!nvme_files[@]}" 00:01:42.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:42.848 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.848 + for nvme in "${!nvme_files[@]}" 00:01:42.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:42.848 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.848 + for nvme in "${!nvme_files[@]}" 00:01:42.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:43.421 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:43.421 + for nvme in "${!nvme_files[@]}" 00:01:43.421 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:01:43.680 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw 
size=1073741824 preallocation=falloc 00:01:43.680 + for nvme in "${!nvme_files[@]}" 00:01:43.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:44.332 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:44.332 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:44.332 + echo 'End stage prepare_nvme.sh' 00:01:44.332 End stage prepare_nvme.sh 00:01:44.347 [Pipeline] sh 00:01:44.635 + DISTRO=fedora39 00:01:44.635 + CPUS=10 00:01:44.635 + RAM=12288 00:01:44.635 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:44.635 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:44.635 00:01:44.635 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:44.635 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:44.635 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:44.635 HELP=0 00:01:44.635 DRY_RUN=0 00:01:44.635 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:01:44.635 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:44.635 NVME_AUTO_CREATE=0 00:01:44.635 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:01:44.635 NVME_CMB=,,,, 00:01:44.635 NVME_PMR=,,,, 00:01:44.635 NVME_ZNS=,,,, 00:01:44.635 NVME_MS=true,,,, 00:01:44.635 NVME_FDP=,,,on, 00:01:44.635 SPDK_VAGRANT_DISTRO=fedora39 00:01:44.635 SPDK_VAGRANT_VMCPU=10 00:01:44.635 SPDK_VAGRANT_VMRAM=12288 00:01:44.635 SPDK_VAGRANT_PROVIDER=libvirt 00:01:44.635 SPDK_VAGRANT_HTTP_PROXY= 00:01:44.635 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:44.635 SPDK_OPENSTACK_NETWORK=0 00:01:44.635 VAGRANT_PACKAGE_BOX=0 00:01:44.635 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:44.635 FORCE_DISTRO=true 00:01:44.635 VAGRANT_BOX_VERSION= 00:01:44.635 EXTRA_VAGRANTFILES= 00:01:44.635 NIC_MODEL=e1000 00:01:44.635 00:01:44.635 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:44.635 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:47.180 Bringing machine 'default' up with 'libvirt' provider... 00:01:47.749 ==> default: Creating image (snapshot of base box volume). 00:01:47.749 ==> default: Creating domain with the following settings... 
00:01:47.749 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732120112_df8844f6266fb8e5cc77 00:01:47.749 ==> default: -- Domain type: kvm 00:01:47.749 ==> default: -- Cpus: 10 00:01:47.749 ==> default: -- Feature: acpi 00:01:47.749 ==> default: -- Feature: apic 00:01:47.749 ==> default: -- Feature: pae 00:01:47.749 ==> default: -- Memory: 12288M 00:01:47.749 ==> default: -- Memory Backing: hugepages: 00:01:47.749 ==> default: -- Management MAC: 00:01:47.749 ==> default: -- Loader: 00:01:47.749 ==> default: -- Nvram: 00:01:47.749 ==> default: -- Base box: spdk/fedora39 00:01:47.749 ==> default: -- Storage pool: default 00:01:47.749 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732120112_df8844f6266fb8e5cc77.img (20G) 00:01:47.749 ==> default: -- Volume Cache: default 00:01:47.749 ==> default: -- Kernel: 00:01:47.749 ==> default: -- Initrd: 00:01:47.749 ==> default: -- Graphics Type: vnc 00:01:47.749 ==> default: -- Graphics Port: -1 00:01:47.749 ==> default: -- Graphics IP: 127.0.0.1 00:01:47.749 ==> default: -- Graphics Password: Not defined 00:01:47.749 ==> default: -- Video Type: cirrus 00:01:47.749 ==> default: -- Video VRAM: 9216 00:01:47.749 ==> default: -- Sound Type: 00:01:47.749 ==> default: -- Keymap: en-us 00:01:47.749 ==> default: -- TPM Path: 00:01:47.749 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:47.749 ==> default: -- Command line args: 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:47.749 ==> default: -> value=-drive, 00:01:47.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:47.749 ==> default: -> value=-drive, 00:01:47.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:47.749 ==> default: -> value=-drive, 00:01:47.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.749 ==> default: -> value=-drive, 00:01:47.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.749 ==> default: -> value=-drive, 00:01:47.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:47.749 ==> default: -> value=-drive, 00:01:47.749 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:47.749 ==> default: -> value=-device, 00:01:47.749 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.749 ==> default: Creating shared folders metadata... 00:01:47.749 ==> default: Starting domain. 00:01:49.130 ==> default: Waiting for domain to get an IP address... 00:02:07.217 ==> default: Waiting for SSH to become available... 00:02:07.217 ==> default: Configuring and enabling network interfaces... 00:02:09.116 default: SSH address: 192.168.121.11:22 00:02:09.116 default: SSH username: vagrant 00:02:09.116 default: SSH auth method: private key 00:02:10.546 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.104 ==> default: Mounting SSHFS shared folder... 00:02:18.479 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:18.479 ==> default: Checking Mount.. 00:02:19.414 ==> default: Folder Successfully Mounted! 00:02:19.414 00:02:19.414 SUCCESS! 00:02:19.414 00:02:19.414 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:19.414 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:19.414 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:19.414 00:02:19.422 [Pipeline] } 00:02:19.437 [Pipeline] // stage 00:02:19.446 [Pipeline] dir 00:02:19.447 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:19.448 [Pipeline] { 00:02:19.462 [Pipeline] catchError 00:02:19.463 [Pipeline] { 00:02:19.476 [Pipeline] sh 00:02:19.754 + vagrant ssh-config --host vagrant 00:02:19.754 + sed -ne '/^Host/,$p' 00:02:19.754 + tee ssh_conf 00:02:22.377 Host vagrant 00:02:22.377 HostName 192.168.121.11 00:02:22.377 User vagrant 00:02:22.377 Port 22 00:02:22.377 UserKnownHostsFile /dev/null 00:02:22.377 StrictHostKeyChecking no 00:02:22.377 PasswordAuthentication no 00:02:22.377 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:22.377 IdentitiesOnly yes 00:02:22.377 LogLevel FATAL 00:02:22.377 ForwardAgent yes 00:02:22.377 ForwardX11 yes 00:02:22.377 00:02:22.390 [Pipeline] withEnv 00:02:22.393 [Pipeline] { 00:02:22.407 [Pipeline] sh 00:02:22.692 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:22.692 source /etc/os-release 00:02:22.692 [[ -e /image.version ]] && img=$(< /image.version) 00:02:22.692 # Minimal, systemd-like check. 
00:02:22.692 if [[ -e /.dockerenv ]]; then 00:02:22.692 # Clear garbage from the node'\''s name: 00:02:22.692 # agt-er_autotest_547-896 -> autotest_547-896 00:02:22.692 # $HOSTNAME is the actual container id 00:02:22.692 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:22.692 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:22.692 # We can assume this is a mount from a host where container is running, 00:02:22.692 # so fetch its hostname to easily identify the target swarm worker. 00:02:22.692 container="$(< /etc/hostname) ($agent)" 00:02:22.692 else 00:02:22.692 # Fallback 00:02:22.692 container=$agent 00:02:22.692 fi 00:02:22.692 fi 00:02:22.692 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:22.692 ' 00:02:22.706 [Pipeline] } 00:02:22.721 [Pipeline] // withEnv 00:02:22.729 [Pipeline] setCustomBuildProperty 00:02:22.743 [Pipeline] stage 00:02:22.746 [Pipeline] { (Tests) 00:02:22.762 [Pipeline] sh 00:02:23.045 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:23.320 [Pipeline] sh 00:02:23.603 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:23.619 [Pipeline] timeout 00:02:23.619 Timeout set to expire in 50 min 00:02:23.621 [Pipeline] { 00:02:23.635 [Pipeline] sh 00:02:23.918 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:24.491 HEAD is now at ede20dc4e lib/nvmf: Fix double free of connect request 00:02:24.535 [Pipeline] sh 00:02:24.818 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:25.094 [Pipeline] sh 00:02:25.383 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:25.401 [Pipeline] sh 00:02:25.685 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:25.946 ++ readlink -f spdk_repo 00:02:25.946 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.947 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.947 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.947 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:25.947 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.947 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:25.947 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.947 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:25.947 + cd /home/vagrant/spdk_repo 00:02:25.947 + source /etc/os-release 00:02:25.947 ++ NAME='Fedora Linux' 00:02:25.947 ++ VERSION='39 (Cloud Edition)' 00:02:25.947 ++ ID=fedora 00:02:25.947 ++ VERSION_ID=39 00:02:25.947 ++ VERSION_CODENAME= 00:02:25.947 ++ PLATFORM_ID=platform:f39 00:02:25.947 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.947 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.947 ++ LOGO=fedora-logo-icon 00:02:25.947 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.947 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.947 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.947 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.947 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.947 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.947 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.947 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.947 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.947 ++ SUPPORT_END=2024-11-12 00:02:25.947 ++ VARIANT='Cloud Edition' 00:02:25.947 ++ VARIANT_ID=cloud 00:02:25.947 + uname -a 00:02:25.947 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.947 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:26.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:26.504 Hugepages 00:02:26.504 node hugesize free / total 00:02:26.504 node0 1048576kB 0 / 0 00:02:26.504 node0 2048kB 0 / 0 00:02:26.504 00:02:26.504 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.504 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:26.504 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:26.504 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:26.504 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:26.504 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:26.504 + rm -f /tmp/spdk-ld-path 00:02:26.504 + source autorun-spdk.conf 00:02:26.504 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.504 ++ SPDK_TEST_NVME=1 00:02:26.504 ++ SPDK_TEST_FTL=1 00:02:26.504 ++ SPDK_TEST_ISAL=1 00:02:26.504 ++ SPDK_RUN_ASAN=1 00:02:26.504 ++ SPDK_RUN_UBSAN=1 00:02:26.504 ++ SPDK_TEST_XNVME=1 00:02:26.504 ++ SPDK_TEST_NVME_FDP=1 00:02:26.504 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.504 ++ RUN_NIGHTLY=0 00:02:26.504 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:26.504 + [[ -n '' ]] 00:02:26.504 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:26.504 + for M in /var/spdk/build-*-manifest.txt 00:02:26.504 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:26.504 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.504 + for M in /var/spdk/build-*-manifest.txt 00:02:26.504 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:26.504 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.766 + for M in /var/spdk/build-*-manifest.txt 00:02:26.766 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:26.766 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.766 ++ uname 00:02:26.766 + [[ Linux == \L\i\n\u\x ]] 00:02:26.766 + sudo dmesg -T 00:02:26.766 + sudo dmesg --clear 00:02:26.766 + dmesg_pid=5019 00:02:26.766 
+ [[ Fedora Linux == FreeBSD ]] 00:02:26.766 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.766 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.766 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:26.766 + [[ -x /usr/src/fio-static/fio ]] 00:02:26.766 + sudo dmesg -Tw 00:02:26.766 + export FIO_BIN=/usr/src/fio-static/fio 00:02:26.766 + FIO_BIN=/usr/src/fio-static/fio 00:02:26.766 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:26.766 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:26.766 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:26.766 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.766 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.766 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:26.766 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.766 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.766 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:26.766 16:29:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:26.766 16:29:11 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.766 16:29:11 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:26.766 16:29:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:26.766 16:29:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:27.027 16:29:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:27.027 16:29:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:27.027 16:29:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:27.027 16:29:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:27.027 16:29:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.027 16:29:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.027 16:29:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.027 16:29:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.027 16:29:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.027 16:29:11 -- paths/export.sh@5 -- $ export PATH 00:02:27.028 16:29:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.028 16:29:11 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:27.028 16:29:11 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:27.028 16:29:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732120151.XXXXXX 00:02:27.028 16:29:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732120151.6bU943 00:02:27.028 16:29:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:27.028 16:29:11 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:27.028 16:29:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:27.028 16:29:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:27.028 16:29:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:27.028 16:29:11 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:27.028 16:29:11 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:27.028 16:29:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.028 16:29:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:27.028 16:29:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:27.028 16:29:11 -- pm/common@17 -- $ local monitor 00:02:27.028 16:29:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.028 16:29:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.028 16:29:11 -- pm/common@25 -- $ sleep 1 00:02:27.028 16:29:11 -- pm/common@21 -- $ date +%s 00:02:27.028 16:29:11 -- pm/common@21 -- $ date +%s 00:02:27.028 16:29:11 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732120151 00:02:27.028 16:29:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732120151 00:02:27.028 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732120151_collect-cpu-load.pm.log 00:02:27.028 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732120151_collect-vmstat.pm.log 00:02:27.971 16:29:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:27.972 16:29:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:27.972 16:29:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:27.972 16:29:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:27.972 16:29:12 -- spdk/autobuild.sh@16 -- $ date -u 00:02:27.972 Wed Nov 20 04:29:12 PM UTC 2024 00:02:27.972 16:29:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:27.972 v25.01-pre-221-gede20dc4e 00:02:27.972 16:29:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:27.972 16:29:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:27.972 16:29:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:27.972 16:29:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:27.972 16:29:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.972 ************************************ 00:02:27.972 START TEST asan 00:02:27.972 ************************************ 00:02:27.972 using asan 00:02:27.972 ************************************ 00:02:27.972 END TEST asan 00:02:27.972 ************************************ 00:02:27.972 16:29:12 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:27.972 00:02:27.972 real 0m0.000s 00:02:27.972 user 0m0.000s 00:02:27.972 sys 0m0.000s 00:02:27.972 16:29:12 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:27.972 16:29:12 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.972 16:29:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:27.972 16:29:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:27.972 16:29:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:27.972 16:29:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:27.972 16:29:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.972 ************************************ 00:02:27.972 START TEST ubsan 00:02:27.972 ************************************ 00:02:27.972 using ubsan 00:02:27.972 16:29:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:27.972 00:02:27.972 real 0m0.000s 00:02:27.972 user 0m0.000s 00:02:27.972 sys 0m0.000s 00:02:27.972 16:29:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:27.972 ************************************ 00:02:27.972 END TEST ubsan 00:02:27.972 ************************************ 00:02:27.972 16:29:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:28.232 16:29:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:28.232 16:29:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:28.232 16:29:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:28.232 16:29:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:28.232 16:29:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:28.232 16:29:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:28.232 16:29:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:28.232 16:29:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:28.232 16:29:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:28.232 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:28.232 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:28.801 Using 'verbs' RDMA provider 00:02:39.374 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:51.584 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:51.584 Creating mk/config.mk...done. 00:02:51.584 Creating mk/cc.flags.mk...done. 00:02:51.584 Type 'make' to build. 00:02:51.584 16:29:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:51.584 16:29:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:51.584 16:29:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:51.584 16:29:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.584 ************************************ 00:02:51.584 START TEST make 00:02:51.584 ************************************ 00:02:51.584 16:29:35 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:51.584 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:51.584 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:51.584 meson setup builddir \ 00:02:51.584 -Dwith-libaio=enabled \ 00:02:51.584 -Dwith-liburing=enabled \ 00:02:51.584 -Dwith-libvfn=disabled \ 00:02:51.584 -Dwith-spdk=disabled \ 00:02:51.584 -Dexamples=false \ 00:02:51.584 -Dtests=false \ 00:02:51.584 -Dtools=false && \ 00:02:51.584 meson compile -C builddir && \ 00:02:51.584 cd -) 00:02:51.584 make[1]: Nothing to be done for 'all'. 
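[editor's note] The make recipe echoed just above drives the bundled xnvme build through meson. As a reference, the same configuration can be reproduced by hand outside the CI harness; the sketch below is a minimal, hedged restatement of the invocation shown in this log. The checkout path (~/spdk_repo/spdk) and the assumption that meson and ninja are already installed are taken from this run's environment, not guaranteed by the script itself.

  #!/usr/bin/env bash
  # Minimal sketch: configure and build the in-tree xnvme the same way the
  # autobuild step above does. Assumes meson + ninja are installed and that
  # the SPDK checkout lives at ~/spdk_repo/spdk (as it does in this CI run).
  set -euo pipefail

  XNVME_DIR=~/spdk_repo/spdk/xnvme   # path taken from this log; adjust locally

  cd "$XNVME_DIR"
  export PKG_CONFIG_PATH=${PKG_CONFIG_PATH:-}:/usr/lib/pkgconfig:/usr/lib64/pkgconfig

  # Same feature switches the CI run passes: libaio and io_uring backends on,
  # libvfn and the SPDK backend off, no examples/tests/tools.
  meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled \
      -Dexamples=false \
      -Dtests=false \
      -Dtools=false

  meson compile -C builddir

The [1/76]..[76/76] compile and link steps that follow in this log are the ninja targets produced by exactly this setup (static lib/libxnvme.a and lib/libxnvme.so.0.7.5).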
00:02:52.954 The Meson build system 00:02:52.954 Version: 1.5.0 00:02:52.954 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:52.954 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:52.954 Build type: native build 00:02:52.954 Project name: xnvme 00:02:52.954 Project version: 0.7.5 00:02:52.954 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:52.954 C linker for the host machine: cc ld.bfd 2.40-14 00:02:52.954 Host machine cpu family: x86_64 00:02:52.954 Host machine cpu: x86_64 00:02:52.954 Message: host_machine.system: linux 00:02:52.954 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:52.954 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:52.954 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:52.954 Run-time dependency threads found: YES 00:02:52.954 Has header "setupapi.h" : NO 00:02:52.954 Has header "linux/blkzoned.h" : YES 00:02:52.954 Has header "linux/blkzoned.h" : YES (cached) 00:02:52.954 Has header "libaio.h" : YES 00:02:52.954 Library aio found: YES 00:02:52.954 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:52.954 Run-time dependency liburing found: YES 2.2 00:02:52.954 Dependency libvfn skipped: feature with-libvfn disabled 00:02:52.954 Found CMake: /usr/bin/cmake (3.27.7) 00:02:52.954 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:52.954 Subproject spdk : skipped: feature with-spdk disabled 00:02:52.954 Run-time dependency appleframeworks found: NO (tried framework) 00:02:52.954 Run-time dependency appleframeworks found: NO (tried framework) 00:02:52.954 Library rt found: YES 00:02:52.954 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:52.954 Configuring xnvme_config.h using configuration 00:02:52.954 Configuring xnvme.spec using configuration 00:02:52.954 Run-time dependency bash-completion found: YES 2.11 00:02:52.954 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:52.954 Program cp found: YES (/usr/bin/cp) 00:02:52.954 Build targets in project: 3 00:02:52.954 00:02:52.954 xnvme 0.7.5 00:02:52.954 00:02:52.954 Subprojects 00:02:52.954 spdk : NO Feature 'with-spdk' disabled 00:02:52.954 00:02:52.954 User defined options 00:02:52.954 examples : false 00:02:52.954 tests : false 00:02:52.954 tools : false 00:02:52.954 with-libaio : enabled 00:02:52.954 with-liburing: enabled 00:02:52.954 with-libvfn : disabled 00:02:52.954 with-spdk : disabled 00:02:52.954 00:02:52.954 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:53.520 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:53.520 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:53.520 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:53.520 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:53.520 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:53.520 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:53.520 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:53.520 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:53.520 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:53.520 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:53.520 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:02:53.520 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:53.520 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:53.520 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:53.520 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:53.520 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:53.520 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:53.520 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:53.520 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:53.520 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:53.779 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:53.779 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:53.779 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:53.779 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:53.779 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:53.779 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:53.779 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:53.779 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:53.779 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:53.779 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:53.779 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:53.779 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:53.779 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:53.779 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:53.779 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:53.779 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:53.779 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:53.779 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:53.779 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:53.779 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:53.779 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:53.779 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:53.779 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:53.779 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:53.779 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:53.779 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:53.779 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:53.779 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:53.779 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:53.779 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:53.779 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 
00:02:53.779 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:53.779 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:53.779 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:53.779 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:54.036 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:54.036 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:54.036 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:54.036 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:54.036 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:54.036 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:54.036 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:54.036 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:54.036 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:54.036 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:54.036 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:54.036 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:54.036 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:54.036 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:54.036 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:54.036 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:54.036 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:54.294 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:54.294 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:54.551 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:54.551 [75/76] Linking static target lib/libxnvme.a 00:02:54.551 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:54.551 INFO: autodetecting backend as ninja 00:02:54.551 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:54.551 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:01.106 The Meson build system 00:03:01.106 Version: 1.5.0 00:03:01.106 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:01.106 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:01.106 Build type: native build 00:03:01.106 Program cat found: YES (/usr/bin/cat) 00:03:01.106 Project name: DPDK 00:03:01.106 Project version: 24.03.0 00:03:01.106 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:01.106 C linker for the host machine: cc ld.bfd 2.40-14 00:03:01.106 Host machine cpu family: x86_64 00:03:01.106 Host machine cpu: x86_64 00:03:01.106 Message: ## Building in Developer Mode ## 00:03:01.106 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:01.106 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:01.106 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:01.106 Program python3 found: YES (/usr/bin/python3) 00:03:01.106 Program cat found: YES (/usr/bin/cat) 00:03:01.106 Compiler for C supports arguments -march=native: YES 00:03:01.106 Checking for size of "void *" : 8 00:03:01.106 Checking for size of "void *" : 8 (cached) 00:03:01.106 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:03:01.106 Library m found: YES 00:03:01.106 Library numa found: YES 00:03:01.106 Has header "numaif.h" : YES 00:03:01.106 Library fdt found: NO 00:03:01.106 Library execinfo found: NO 00:03:01.106 Has header "execinfo.h" : YES 00:03:01.106 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:01.106 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:01.106 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:01.106 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:01.106 Run-time dependency openssl found: YES 3.1.1 00:03:01.106 Run-time dependency libpcap found: YES 1.10.4 00:03:01.106 Has header "pcap.h" with dependency libpcap: YES 00:03:01.106 Compiler for C supports arguments -Wcast-qual: YES 00:03:01.106 Compiler for C supports arguments -Wdeprecated: YES 00:03:01.106 Compiler for C supports arguments -Wformat: YES 00:03:01.106 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:01.107 Compiler for C supports arguments -Wformat-security: NO 00:03:01.107 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.107 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:01.107 Compiler for C supports arguments -Wnested-externs: YES 00:03:01.107 Compiler for C supports arguments -Wold-style-definition: YES 00:03:01.107 Compiler for C supports arguments -Wpointer-arith: YES 00:03:01.107 Compiler for C supports arguments -Wsign-compare: YES 00:03:01.107 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:01.107 Compiler for C supports arguments -Wundef: YES 00:03:01.107 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.107 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:01.107 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:01.107 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.107 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:01.107 Program objdump found: YES (/usr/bin/objdump) 00:03:01.107 Compiler for C supports arguments -mavx512f: YES 00:03:01.107 Checking if "AVX512 checking" compiles: YES 00:03:01.107 Fetching value of define "__SSE4_2__" : 1 00:03:01.107 Fetching value of define "__AES__" : 1 00:03:01.107 Fetching value of define "__AVX__" : 1 00:03:01.107 Fetching value of define "__AVX2__" : 1 00:03:01.107 Fetching value of define "__AVX512BW__" : 1 00:03:01.107 Fetching value of define "__AVX512CD__" : 1 00:03:01.107 Fetching value of define "__AVX512DQ__" : 1 00:03:01.107 Fetching value of define "__AVX512F__" : 1 00:03:01.107 Fetching value of define "__AVX512VL__" : 1 00:03:01.107 Fetching value of define "__PCLMUL__" : 1 00:03:01.107 Fetching value of define "__RDRND__" : 1 00:03:01.107 Fetching value of define "__RDSEED__" : 1 00:03:01.107 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:01.107 Fetching value of define "__znver1__" : (undefined) 00:03:01.107 Fetching value of define "__znver2__" : (undefined) 00:03:01.107 Fetching value of define "__znver3__" : (undefined) 00:03:01.107 Fetching value of define "__znver4__" : (undefined) 00:03:01.107 Library asan found: YES 00:03:01.107 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.107 Message: lib/log: Defining dependency "log" 00:03:01.107 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.107 Message: lib/telemetry: Defining dependency "telemetry" 00:03:01.107 Library rt found: YES 00:03:01.107 Checking for function "getentropy" : NO 
00:03:01.107 Message: lib/eal: Defining dependency "eal" 00:03:01.107 Message: lib/ring: Defining dependency "ring" 00:03:01.107 Message: lib/rcu: Defining dependency "rcu" 00:03:01.107 Message: lib/mempool: Defining dependency "mempool" 00:03:01.107 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.107 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:01.107 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.107 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.107 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.107 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:01.107 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:01.107 Compiler for C supports arguments -mpclmul: YES 00:03:01.107 Compiler for C supports arguments -maes: YES 00:03:01.107 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.107 Compiler for C supports arguments -mavx512bw: YES 00:03:01.107 Compiler for C supports arguments -mavx512dq: YES 00:03:01.107 Compiler for C supports arguments -mavx512vl: YES 00:03:01.107 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:01.107 Compiler for C supports arguments -mavx2: YES 00:03:01.107 Compiler for C supports arguments -mavx: YES 00:03:01.107 Message: lib/net: Defining dependency "net" 00:03:01.107 Message: lib/meter: Defining dependency "meter" 00:03:01.107 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.107 Message: lib/pci: Defining dependency "pci" 00:03:01.107 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.107 Message: lib/hash: Defining dependency "hash" 00:03:01.107 Message: lib/timer: Defining dependency "timer" 00:03:01.107 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.107 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.107 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.107 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.107 Message: lib/power: Defining dependency "power" 00:03:01.107 Message: lib/reorder: Defining dependency "reorder" 00:03:01.107 Message: lib/security: Defining dependency "security" 00:03:01.107 Has header "linux/userfaultfd.h" : YES 00:03:01.107 Has header "linux/vduse.h" : YES 00:03:01.107 Message: lib/vhost: Defining dependency "vhost" 00:03:01.107 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.107 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.107 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.107 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:01.107 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:01.107 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:01.107 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:01.107 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:01.107 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:01.107 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:01.107 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:01.107 Configuring doxy-api-html.conf using configuration 00:03:01.107 Configuring doxy-api-man.conf using configuration 00:03:01.107 Program mandb found: YES (/usr/bin/mandb) 00:03:01.107 Program sphinx-build found: NO 00:03:01.107 Configuring rte_build_config.h using configuration 00:03:01.107 Message: 00:03:01.107 ================= 00:03:01.107 Applications 
Enabled 00:03:01.107 ================= 00:03:01.107 00:03:01.107 apps: 00:03:01.107 00:03:01.107 00:03:01.107 Message: 00:03:01.107 ================= 00:03:01.107 Libraries Enabled 00:03:01.107 ================= 00:03:01.107 00:03:01.107 libs: 00:03:01.107 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:01.107 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:01.107 cryptodev, dmadev, power, reorder, security, vhost, 00:03:01.107 00:03:01.107 Message: 00:03:01.107 =============== 00:03:01.107 Drivers Enabled 00:03:01.107 =============== 00:03:01.107 00:03:01.107 common: 00:03:01.107 00:03:01.107 bus: 00:03:01.107 pci, vdev, 00:03:01.107 mempool: 00:03:01.107 ring, 00:03:01.107 dma: 00:03:01.107 00:03:01.107 net: 00:03:01.107 00:03:01.107 crypto: 00:03:01.107 00:03:01.107 compress: 00:03:01.107 00:03:01.107 vdpa: 00:03:01.107 00:03:01.107 00:03:01.107 Message: 00:03:01.107 ================= 00:03:01.107 Content Skipped 00:03:01.107 ================= 00:03:01.107 00:03:01.107 apps: 00:03:01.107 dumpcap: explicitly disabled via build config 00:03:01.107 graph: explicitly disabled via build config 00:03:01.107 pdump: explicitly disabled via build config 00:03:01.107 proc-info: explicitly disabled via build config 00:03:01.107 test-acl: explicitly disabled via build config 00:03:01.107 test-bbdev: explicitly disabled via build config 00:03:01.107 test-cmdline: explicitly disabled via build config 00:03:01.107 test-compress-perf: explicitly disabled via build config 00:03:01.107 test-crypto-perf: explicitly disabled via build config 00:03:01.107 test-dma-perf: explicitly disabled via build config 00:03:01.107 test-eventdev: explicitly disabled via build config 00:03:01.107 test-fib: explicitly disabled via build config 00:03:01.107 test-flow-perf: explicitly disabled via build config 00:03:01.107 test-gpudev: explicitly disabled via build config 00:03:01.107 test-mldev: explicitly disabled via build config 00:03:01.107 test-pipeline: explicitly disabled via build config 00:03:01.107 test-pmd: explicitly disabled via build config 00:03:01.107 test-regex: explicitly disabled via build config 00:03:01.107 test-sad: explicitly disabled via build config 00:03:01.107 test-security-perf: explicitly disabled via build config 00:03:01.107 00:03:01.107 libs: 00:03:01.107 argparse: explicitly disabled via build config 00:03:01.107 metrics: explicitly disabled via build config 00:03:01.107 acl: explicitly disabled via build config 00:03:01.107 bbdev: explicitly disabled via build config 00:03:01.107 bitratestats: explicitly disabled via build config 00:03:01.107 bpf: explicitly disabled via build config 00:03:01.107 cfgfile: explicitly disabled via build config 00:03:01.107 distributor: explicitly disabled via build config 00:03:01.107 efd: explicitly disabled via build config 00:03:01.107 eventdev: explicitly disabled via build config 00:03:01.107 dispatcher: explicitly disabled via build config 00:03:01.107 gpudev: explicitly disabled via build config 00:03:01.107 gro: explicitly disabled via build config 00:03:01.107 gso: explicitly disabled via build config 00:03:01.107 ip_frag: explicitly disabled via build config 00:03:01.107 jobstats: explicitly disabled via build config 00:03:01.107 latencystats: explicitly disabled via build config 00:03:01.107 lpm: explicitly disabled via build config 00:03:01.107 member: explicitly disabled via build config 00:03:01.107 pcapng: explicitly disabled via build config 00:03:01.107 rawdev: explicitly disabled via build config 00:03:01.107 
regexdev: explicitly disabled via build config 00:03:01.107 mldev: explicitly disabled via build config 00:03:01.107 rib: explicitly disabled via build config 00:03:01.107 sched: explicitly disabled via build config 00:03:01.107 stack: explicitly disabled via build config 00:03:01.107 ipsec: explicitly disabled via build config 00:03:01.107 pdcp: explicitly disabled via build config 00:03:01.107 fib: explicitly disabled via build config 00:03:01.107 port: explicitly disabled via build config 00:03:01.107 pdump: explicitly disabled via build config 00:03:01.107 table: explicitly disabled via build config 00:03:01.107 pipeline: explicitly disabled via build config 00:03:01.107 graph: explicitly disabled via build config 00:03:01.107 node: explicitly disabled via build config 00:03:01.107 00:03:01.107 drivers: 00:03:01.107 common/cpt: not in enabled drivers build config 00:03:01.108 common/dpaax: not in enabled drivers build config 00:03:01.108 common/iavf: not in enabled drivers build config 00:03:01.108 common/idpf: not in enabled drivers build config 00:03:01.108 common/ionic: not in enabled drivers build config 00:03:01.108 common/mvep: not in enabled drivers build config 00:03:01.108 common/octeontx: not in enabled drivers build config 00:03:01.108 bus/auxiliary: not in enabled drivers build config 00:03:01.108 bus/cdx: not in enabled drivers build config 00:03:01.108 bus/dpaa: not in enabled drivers build config 00:03:01.108 bus/fslmc: not in enabled drivers build config 00:03:01.108 bus/ifpga: not in enabled drivers build config 00:03:01.108 bus/platform: not in enabled drivers build config 00:03:01.108 bus/uacce: not in enabled drivers build config 00:03:01.108 bus/vmbus: not in enabled drivers build config 00:03:01.108 common/cnxk: not in enabled drivers build config 00:03:01.108 common/mlx5: not in enabled drivers build config 00:03:01.108 common/nfp: not in enabled drivers build config 00:03:01.108 common/nitrox: not in enabled drivers build config 00:03:01.108 common/qat: not in enabled drivers build config 00:03:01.108 common/sfc_efx: not in enabled drivers build config 00:03:01.108 mempool/bucket: not in enabled drivers build config 00:03:01.108 mempool/cnxk: not in enabled drivers build config 00:03:01.108 mempool/dpaa: not in enabled drivers build config 00:03:01.108 mempool/dpaa2: not in enabled drivers build config 00:03:01.108 mempool/octeontx: not in enabled drivers build config 00:03:01.108 mempool/stack: not in enabled drivers build config 00:03:01.108 dma/cnxk: not in enabled drivers build config 00:03:01.108 dma/dpaa: not in enabled drivers build config 00:03:01.108 dma/dpaa2: not in enabled drivers build config 00:03:01.108 dma/hisilicon: not in enabled drivers build config 00:03:01.108 dma/idxd: not in enabled drivers build config 00:03:01.108 dma/ioat: not in enabled drivers build config 00:03:01.108 dma/skeleton: not in enabled drivers build config 00:03:01.108 net/af_packet: not in enabled drivers build config 00:03:01.108 net/af_xdp: not in enabled drivers build config 00:03:01.108 net/ark: not in enabled drivers build config 00:03:01.108 net/atlantic: not in enabled drivers build config 00:03:01.108 net/avp: not in enabled drivers build config 00:03:01.108 net/axgbe: not in enabled drivers build config 00:03:01.108 net/bnx2x: not in enabled drivers build config 00:03:01.108 net/bnxt: not in enabled drivers build config 00:03:01.108 net/bonding: not in enabled drivers build config 00:03:01.108 net/cnxk: not in enabled drivers build config 00:03:01.108 net/cpfl: 
not in enabled drivers build config 00:03:01.108 net/cxgbe: not in enabled drivers build config 00:03:01.108 net/dpaa: not in enabled drivers build config 00:03:01.108 net/dpaa2: not in enabled drivers build config 00:03:01.108 net/e1000: not in enabled drivers build config 00:03:01.108 net/ena: not in enabled drivers build config 00:03:01.108 net/enetc: not in enabled drivers build config 00:03:01.108 net/enetfec: not in enabled drivers build config 00:03:01.108 net/enic: not in enabled drivers build config 00:03:01.108 net/failsafe: not in enabled drivers build config 00:03:01.108 net/fm10k: not in enabled drivers build config 00:03:01.108 net/gve: not in enabled drivers build config 00:03:01.108 net/hinic: not in enabled drivers build config 00:03:01.108 net/hns3: not in enabled drivers build config 00:03:01.108 net/i40e: not in enabled drivers build config 00:03:01.108 net/iavf: not in enabled drivers build config 00:03:01.108 net/ice: not in enabled drivers build config 00:03:01.108 net/idpf: not in enabled drivers build config 00:03:01.108 net/igc: not in enabled drivers build config 00:03:01.108 net/ionic: not in enabled drivers build config 00:03:01.108 net/ipn3ke: not in enabled drivers build config 00:03:01.108 net/ixgbe: not in enabled drivers build config 00:03:01.108 net/mana: not in enabled drivers build config 00:03:01.108 net/memif: not in enabled drivers build config 00:03:01.108 net/mlx4: not in enabled drivers build config 00:03:01.108 net/mlx5: not in enabled drivers build config 00:03:01.108 net/mvneta: not in enabled drivers build config 00:03:01.108 net/mvpp2: not in enabled drivers build config 00:03:01.108 net/netvsc: not in enabled drivers build config 00:03:01.108 net/nfb: not in enabled drivers build config 00:03:01.108 net/nfp: not in enabled drivers build config 00:03:01.108 net/ngbe: not in enabled drivers build config 00:03:01.108 net/null: not in enabled drivers build config 00:03:01.108 net/octeontx: not in enabled drivers build config 00:03:01.108 net/octeon_ep: not in enabled drivers build config 00:03:01.108 net/pcap: not in enabled drivers build config 00:03:01.108 net/pfe: not in enabled drivers build config 00:03:01.108 net/qede: not in enabled drivers build config 00:03:01.108 net/ring: not in enabled drivers build config 00:03:01.108 net/sfc: not in enabled drivers build config 00:03:01.108 net/softnic: not in enabled drivers build config 00:03:01.108 net/tap: not in enabled drivers build config 00:03:01.108 net/thunderx: not in enabled drivers build config 00:03:01.108 net/txgbe: not in enabled drivers build config 00:03:01.108 net/vdev_netvsc: not in enabled drivers build config 00:03:01.108 net/vhost: not in enabled drivers build config 00:03:01.108 net/virtio: not in enabled drivers build config 00:03:01.108 net/vmxnet3: not in enabled drivers build config 00:03:01.108 raw/*: missing internal dependency, "rawdev" 00:03:01.108 crypto/armv8: not in enabled drivers build config 00:03:01.108 crypto/bcmfs: not in enabled drivers build config 00:03:01.108 crypto/caam_jr: not in enabled drivers build config 00:03:01.108 crypto/ccp: not in enabled drivers build config 00:03:01.108 crypto/cnxk: not in enabled drivers build config 00:03:01.108 crypto/dpaa_sec: not in enabled drivers build config 00:03:01.108 crypto/dpaa2_sec: not in enabled drivers build config 00:03:01.108 crypto/ipsec_mb: not in enabled drivers build config 00:03:01.108 crypto/mlx5: not in enabled drivers build config 00:03:01.108 crypto/mvsam: not in enabled drivers build config 
00:03:01.108 crypto/nitrox: not in enabled drivers build config 00:03:01.108 crypto/null: not in enabled drivers build config 00:03:01.108 crypto/octeontx: not in enabled drivers build config 00:03:01.108 crypto/openssl: not in enabled drivers build config 00:03:01.108 crypto/scheduler: not in enabled drivers build config 00:03:01.108 crypto/uadk: not in enabled drivers build config 00:03:01.108 crypto/virtio: not in enabled drivers build config 00:03:01.108 compress/isal: not in enabled drivers build config 00:03:01.108 compress/mlx5: not in enabled drivers build config 00:03:01.108 compress/nitrox: not in enabled drivers build config 00:03:01.108 compress/octeontx: not in enabled drivers build config 00:03:01.108 compress/zlib: not in enabled drivers build config 00:03:01.108 regex/*: missing internal dependency, "regexdev" 00:03:01.108 ml/*: missing internal dependency, "mldev" 00:03:01.108 vdpa/ifc: not in enabled drivers build config 00:03:01.108 vdpa/mlx5: not in enabled drivers build config 00:03:01.108 vdpa/nfp: not in enabled drivers build config 00:03:01.108 vdpa/sfc: not in enabled drivers build config 00:03:01.108 event/*: missing internal dependency, "eventdev" 00:03:01.108 baseband/*: missing internal dependency, "bbdev" 00:03:01.108 gpu/*: missing internal dependency, "gpudev" 00:03:01.108 00:03:01.108 00:03:01.369 Build targets in project: 84 00:03:01.369 00:03:01.369 DPDK 24.03.0 00:03:01.369 00:03:01.369 User defined options 00:03:01.369 buildtype : debug 00:03:01.369 default_library : shared 00:03:01.369 libdir : lib 00:03:01.369 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:01.369 b_sanitize : address 00:03:01.369 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:01.369 c_link_args : 00:03:01.369 cpu_instruction_set: native 00:03:01.369 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:01.369 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:01.369 enable_docs : false 00:03:01.369 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:01.369 enable_kmods : false 00:03:01.369 max_lcores : 128 00:03:01.369 tests : false 00:03:01.369 00:03:01.369 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.936 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:01.936 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:01.936 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:01.936 [3/267] Linking static target lib/librte_kvargs.a 00:03:01.936 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:01.936 [5/267] Linking static target lib/librte_log.a 00:03:01.936 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:02.194 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:02.194 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:02.194 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
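(For reference: the "User defined options" summary printed above corresponds roughly to a meson/ninja invocation of the following shape. This is a reconstructed sketch assembled from the option names, values, and paths shown in the log itself, not the literal command line the SPDK build scripts executed; the source/build directory layout is taken from the "ninja: Entering directory" line.)

    # sketch only - option names/values copied from the summary above
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Db_sanitize=address \
        -Dcpu_instruction_set=native \
        -Dmax_lcores=128 \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Ddisable_apps=... -Ddisable_libs=... -Denable_drivers=... \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10

(The elided disable_apps/disable_libs/enable_drivers lists are exactly those printed in the summary above; the "Content Skipped" section earlier in the output is the result of those same lists.)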
00:03:02.452 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:02.452 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:02.452 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.452 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:02.452 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:02.452 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:02.452 [16/267] Linking static target lib/librte_telemetry.a 00:03:02.452 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:02.452 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:02.711 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:02.711 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.711 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:02.711 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:02.711 [23/267] Linking target lib/librte_log.so.24.1 00:03:02.711 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:02.711 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:02.970 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:02.970 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:02.970 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:02.970 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.970 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:02.970 [31/267] Linking target lib/librte_kvargs.so.24.1 00:03:03.229 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:03.229 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.229 [34/267] Linking target lib/librte_telemetry.so.24.1 00:03:03.229 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:03.229 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:03.229 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:03.229 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:03.229 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:03.229 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:03.487 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:03.487 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:03.487 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:03.487 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:03.487 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:03.487 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:03.487 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:03.487 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:03.487 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:03.746 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:03.746 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:03.746 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:03.746 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:04.004 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:04.004 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:04.004 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:04.004 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:04.004 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:04.004 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:04.004 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:04.004 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:04.004 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:04.004 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.262 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.262 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.262 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.262 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.262 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.520 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.520 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.520 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.520 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.520 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.520 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:04.520 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.520 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:04.777 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:04.777 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.777 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:04.777 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:04.777 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:04.777 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:05.036 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:05.036 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:05.036 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:05.036 [86/267] Linking static target lib/librte_eal.a 00:03:05.036 [87/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:05.036 [88/267] Linking static target lib/librte_ring.a 00:03:05.036 [89/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:05.036 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:05.036 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:05.294 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:05.294 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:05.294 [94/267] Linking static target lib/librte_mempool.a 00:03:05.294 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:05.294 [96/267] Linking static target lib/librte_rcu.a 00:03:05.294 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:05.553 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:05.553 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:05.553 [100/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.553 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:05.553 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:05.553 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:05.553 [104/267] Linking static target lib/librte_mbuf.a 00:03:05.553 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:05.812 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.812 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:05.812 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:05.812 [109/267] Linking static target lib/librte_meter.a 00:03:05.812 [110/267] Linking static target lib/librte_net.a 00:03:05.812 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:06.071 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.071 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:06.071 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:06.071 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:06.071 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.071 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.330 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:06.330 [119/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.330 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:06.589 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:06.589 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:06.589 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:06.847 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:06.847 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:06.847 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:06.847 [127/267] Linking static target lib/librte_pci.a 00:03:06.847 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.847 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:06.847 [130/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:06.847 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:06.847 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:07.106 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:07.106 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:07.106 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:07.106 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:07.106 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:07.106 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.106 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:07.106 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:07.106 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:07.106 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:07.106 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:07.106 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.365 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:07.365 [146/267] Linking static target lib/librte_cmdline.a 00:03:07.365 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:07.365 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:07.365 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:07.365 [150/267] Linking static target lib/librte_timer.a 00:03:07.365 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:07.624 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:07.624 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:07.624 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:07.624 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:07.884 [156/267] Linking static target lib/librte_ethdev.a 00:03:07.884 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:07.884 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:07.884 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:07.884 [160/267] Linking static target lib/librte_compressdev.a 00:03:07.884 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:07.884 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.884 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:08.144 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:08.144 [165/267] Linking static target lib/librte_dmadev.a 00:03:08.144 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:08.144 [167/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:08.144 [168/267] Linking static target lib/librte_hash.a 00:03:08.407 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:08.407 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:08.407 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:08.407 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:08.407 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.407 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.666 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:08.666 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:08.666 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:08.666 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.923 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:08.923 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:08.923 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:08.923 [182/267] Linking static target lib/librte_cryptodev.a 00:03:08.923 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:08.924 [184/267] Linking static target lib/librte_power.a 00:03:09.182 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.182 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:09.182 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:09.182 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:09.182 [189/267] Linking static target lib/librte_reorder.a 00:03:09.182 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:09.182 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:09.182 [192/267] Linking static target lib/librte_security.a 00:03:09.441 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.701 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.701 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.701 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:09.959 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:09.959 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:09.959 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:10.217 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:10.217 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:10.217 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:10.217 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:10.217 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.217 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:10.475 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.475 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:10.475 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.475 [209/267] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:03:10.475 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.734 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:10.734 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:10.734 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:10.734 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:10.734 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.734 [216/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.734 [217/267] Linking static target drivers/librte_bus_vdev.a 00:03:10.734 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.734 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.734 [220/267] Linking static target drivers/librte_bus_pci.a 00:03:10.734 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.734 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.734 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.018 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:11.018 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.018 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.586 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.521 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.521 [229/267] Linking target lib/librte_eal.so.24.1 00:03:12.521 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:12.521 [231/267] Linking target lib/librte_pci.so.24.1 00:03:12.521 [232/267] Linking target lib/librte_meter.so.24.1 00:03:12.521 [233/267] Linking target lib/librte_ring.so.24.1 00:03:12.521 [234/267] Linking target lib/librte_dmadev.so.24.1 00:03:12.521 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:12.521 [236/267] Linking target lib/librte_timer.so.24.1 00:03:12.781 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:12.781 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:12.781 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:12.781 [240/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:12.781 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:12.781 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:12.781 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:12.781 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:12.781 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:12.781 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:13.039 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:13.039 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:13.039 [249/267] Generating symbol 
file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:13.039 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:03:13.039 [251/267] Linking target lib/librte_compressdev.so.24.1 00:03:13.039 [252/267] Linking target lib/librte_net.so.24.1 00:03:13.039 [253/267] Linking target lib/librte_reorder.so.24.1 00:03:13.039 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:13.039 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:13.297 [256/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.297 [257/267] Linking target lib/librte_security.so.24.1 00:03:13.297 [258/267] Linking target lib/librte_cmdline.so.24.1 00:03:13.297 [259/267] Linking target lib/librte_hash.so.24.1 00:03:13.297 [260/267] Linking target lib/librte_ethdev.so.24.1 00:03:13.297 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:13.297 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:13.297 [263/267] Linking target lib/librte_power.so.24.1 00:03:14.236 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:14.236 [265/267] Linking static target lib/librte_vhost.a 00:03:15.610 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.610 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:15.610 INFO: autodetecting backend as ninja 00:03:15.610 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:30.492 CC lib/ut/ut.o 00:03:30.492 CC lib/log/log_flags.o 00:03:30.492 CC lib/log/log_deprecated.o 00:03:30.492 CC lib/log/log.o 00:03:30.492 CC lib/ut_mock/mock.o 00:03:30.492 LIB libspdk_log.a 00:03:30.492 LIB libspdk_ut.a 00:03:30.492 LIB libspdk_ut_mock.a 00:03:30.492 SO libspdk_ut.so.2.0 00:03:30.492 SO libspdk_ut_mock.so.6.0 00:03:30.492 SO libspdk_log.so.7.1 00:03:30.492 SYMLINK libspdk_ut.so 00:03:30.492 SYMLINK libspdk_ut_mock.so 00:03:30.492 SYMLINK libspdk_log.so 00:03:30.492 CC lib/dma/dma.o 00:03:30.492 CC lib/ioat/ioat.o 00:03:30.492 CXX lib/trace_parser/trace.o 00:03:30.492 CC lib/util/base64.o 00:03:30.492 CC lib/util/cpuset.o 00:03:30.492 CC lib/util/bit_array.o 00:03:30.492 CC lib/util/crc16.o 00:03:30.492 CC lib/util/crc32.o 00:03:30.492 CC lib/util/crc32c.o 00:03:30.492 CC lib/vfio_user/host/vfio_user_pci.o 00:03:30.492 CC lib/util/crc32_ieee.o 00:03:30.492 CC lib/vfio_user/host/vfio_user.o 00:03:30.492 CC lib/util/crc64.o 00:03:30.492 CC lib/util/dif.o 00:03:30.492 LIB libspdk_dma.a 00:03:30.492 SO libspdk_dma.so.5.0 00:03:30.492 CC lib/util/fd.o 00:03:30.492 CC lib/util/fd_group.o 00:03:30.492 CC lib/util/file.o 00:03:30.492 SYMLINK libspdk_dma.so 00:03:30.492 CC lib/util/hexlify.o 00:03:30.492 CC lib/util/iov.o 00:03:30.492 LIB libspdk_ioat.a 00:03:30.492 CC lib/util/math.o 00:03:30.492 CC lib/util/net.o 00:03:30.492 SO libspdk_ioat.so.7.0 00:03:30.492 LIB libspdk_vfio_user.a 00:03:30.492 SO libspdk_vfio_user.so.5.0 00:03:30.492 SYMLINK libspdk_ioat.so 00:03:30.492 CC lib/util/pipe.o 00:03:30.492 CC lib/util/strerror_tls.o 00:03:30.492 CC lib/util/string.o 00:03:30.492 SYMLINK libspdk_vfio_user.so 00:03:30.492 CC lib/util/uuid.o 00:03:30.492 CC lib/util/xor.o 00:03:30.492 CC lib/util/zipf.o 00:03:30.492 CC lib/util/md5.o 00:03:30.492 LIB libspdk_util.a 00:03:30.492 SO libspdk_util.so.10.1 00:03:30.492 LIB libspdk_trace_parser.a 
00:03:30.492 SO libspdk_trace_parser.so.6.0 00:03:30.492 SYMLINK libspdk_util.so 00:03:30.492 SYMLINK libspdk_trace_parser.so 00:03:30.492 CC lib/conf/conf.o 00:03:30.492 CC lib/env_dpdk/env.o 00:03:30.492 CC lib/vmd/vmd.o 00:03:30.492 CC lib/env_dpdk/memory.o 00:03:30.492 CC lib/env_dpdk/pci.o 00:03:30.492 CC lib/vmd/led.o 00:03:30.492 CC lib/env_dpdk/init.o 00:03:30.492 CC lib/json/json_parse.o 00:03:30.492 CC lib/idxd/idxd.o 00:03:30.492 CC lib/rdma_utils/rdma_utils.o 00:03:30.492 CC lib/json/json_util.o 00:03:30.492 LIB libspdk_rdma_utils.a 00:03:30.492 LIB libspdk_conf.a 00:03:30.492 CC lib/json/json_write.o 00:03:30.492 SO libspdk_conf.so.6.0 00:03:30.492 SO libspdk_rdma_utils.so.1.0 00:03:30.492 SYMLINK libspdk_conf.so 00:03:30.492 CC lib/idxd/idxd_user.o 00:03:30.492 SYMLINK libspdk_rdma_utils.so 00:03:30.492 CC lib/idxd/idxd_kernel.o 00:03:30.492 CC lib/env_dpdk/threads.o 00:03:30.492 CC lib/env_dpdk/pci_ioat.o 00:03:30.492 CC lib/env_dpdk/pci_virtio.o 00:03:30.492 CC lib/env_dpdk/pci_vmd.o 00:03:30.492 CC lib/env_dpdk/pci_idxd.o 00:03:30.492 CC lib/env_dpdk/pci_event.o 00:03:30.492 CC lib/env_dpdk/sigbus_handler.o 00:03:30.492 CC lib/env_dpdk/pci_dpdk.o 00:03:30.492 LIB libspdk_json.a 00:03:30.492 SO libspdk_json.so.6.0 00:03:30.492 SYMLINK libspdk_json.so 00:03:30.492 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:30.492 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:30.492 LIB libspdk_idxd.a 00:03:30.492 LIB libspdk_vmd.a 00:03:30.492 SO libspdk_idxd.so.12.1 00:03:30.492 SO libspdk_vmd.so.6.0 00:03:30.492 CC lib/jsonrpc/jsonrpc_server.o 00:03:30.492 SYMLINK libspdk_idxd.so 00:03:30.492 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:30.492 CC lib/jsonrpc/jsonrpc_client.o 00:03:30.492 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:30.492 CC lib/rdma_provider/common.o 00:03:30.492 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:30.492 SYMLINK libspdk_vmd.so 00:03:30.751 LIB libspdk_rdma_provider.a 00:03:30.751 LIB libspdk_jsonrpc.a 00:03:30.751 SO libspdk_rdma_provider.so.7.0 00:03:30.751 SO libspdk_jsonrpc.so.6.0 00:03:30.751 SYMLINK libspdk_rdma_provider.so 00:03:30.751 SYMLINK libspdk_jsonrpc.so 00:03:31.009 LIB libspdk_env_dpdk.a 00:03:31.009 SO libspdk_env_dpdk.so.15.1 00:03:31.009 CC lib/rpc/rpc.o 00:03:31.009 SYMLINK libspdk_env_dpdk.so 00:03:31.267 LIB libspdk_rpc.a 00:03:31.267 SO libspdk_rpc.so.6.0 00:03:31.267 SYMLINK libspdk_rpc.so 00:03:31.525 CC lib/trace/trace_flags.o 00:03:31.525 CC lib/notify/notify_rpc.o 00:03:31.525 CC lib/keyring/keyring.o 00:03:31.525 CC lib/trace/trace.o 00:03:31.525 CC lib/notify/notify.o 00:03:31.525 CC lib/trace/trace_rpc.o 00:03:31.525 CC lib/keyring/keyring_rpc.o 00:03:31.525 LIB libspdk_notify.a 00:03:31.525 SO libspdk_notify.so.6.0 00:03:31.525 LIB libspdk_keyring.a 00:03:31.525 LIB libspdk_trace.a 00:03:31.525 SO libspdk_keyring.so.2.0 00:03:31.525 SYMLINK libspdk_notify.so 00:03:31.783 SO libspdk_trace.so.11.0 00:03:31.783 SYMLINK libspdk_keyring.so 00:03:31.783 SYMLINK libspdk_trace.so 00:03:32.041 CC lib/thread/thread.o 00:03:32.041 CC lib/thread/iobuf.o 00:03:32.041 CC lib/sock/sock.o 00:03:32.041 CC lib/sock/sock_rpc.o 00:03:32.299 LIB libspdk_sock.a 00:03:32.299 SO libspdk_sock.so.10.0 00:03:32.299 SYMLINK libspdk_sock.so 00:03:32.563 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:32.563 CC lib/nvme/nvme_ctrlr.o 00:03:32.563 CC lib/nvme/nvme_fabric.o 00:03:32.563 CC lib/nvme/nvme_ns_cmd.o 00:03:32.563 CC lib/nvme/nvme_qpair.o 00:03:32.563 CC lib/nvme/nvme_pcie.o 00:03:32.563 CC lib/nvme/nvme_ns.o 00:03:32.563 CC lib/nvme/nvme.o 00:03:32.563 CC 
lib/nvme/nvme_pcie_common.o 00:03:33.130 LIB libspdk_thread.a 00:03:33.130 CC lib/nvme/nvme_quirks.o 00:03:33.130 SO libspdk_thread.so.11.0 00:03:33.130 SYMLINK libspdk_thread.so 00:03:33.130 CC lib/nvme/nvme_transport.o 00:03:33.130 CC lib/nvme/nvme_discovery.o 00:03:33.388 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:33.388 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:33.388 CC lib/nvme/nvme_tcp.o 00:03:33.388 CC lib/nvme/nvme_opal.o 00:03:33.388 CC lib/nvme/nvme_io_msg.o 00:03:33.388 CC lib/nvme/nvme_poll_group.o 00:03:33.388 CC lib/nvme/nvme_zns.o 00:03:33.646 CC lib/nvme/nvme_stubs.o 00:03:33.646 CC lib/nvme/nvme_auth.o 00:03:33.646 CC lib/nvme/nvme_cuse.o 00:03:33.904 CC lib/nvme/nvme_rdma.o 00:03:34.161 CC lib/blob/blobstore.o 00:03:34.161 CC lib/accel/accel.o 00:03:34.161 CC lib/init/json_config.o 00:03:34.161 CC lib/virtio/virtio.o 00:03:34.161 CC lib/fsdev/fsdev.o 00:03:34.432 CC lib/init/subsystem.o 00:03:34.432 CC lib/fsdev/fsdev_io.o 00:03:34.432 CC lib/fsdev/fsdev_rpc.o 00:03:34.432 CC lib/init/subsystem_rpc.o 00:03:34.432 CC lib/virtio/virtio_vhost_user.o 00:03:34.432 CC lib/blob/request.o 00:03:34.691 CC lib/accel/accel_rpc.o 00:03:34.691 CC lib/init/rpc.o 00:03:34.691 LIB libspdk_init.a 00:03:34.691 CC lib/virtio/virtio_vfio_user.o 00:03:34.691 SO libspdk_init.so.6.0 00:03:34.691 CC lib/virtio/virtio_pci.o 00:03:34.691 SYMLINK libspdk_init.so 00:03:34.691 LIB libspdk_fsdev.a 00:03:34.691 CC lib/blob/zeroes.o 00:03:34.691 SO libspdk_fsdev.so.2.0 00:03:34.949 CC lib/accel/accel_sw.o 00:03:34.949 SYMLINK libspdk_fsdev.so 00:03:34.950 CC lib/blob/blob_bs_dev.o 00:03:34.950 CC lib/event/app.o 00:03:34.950 CC lib/event/reactor.o 00:03:34.950 LIB libspdk_nvme.a 00:03:34.950 CC lib/event/log_rpc.o 00:03:34.950 CC lib/event/app_rpc.o 00:03:34.950 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:34.950 LIB libspdk_virtio.a 00:03:34.950 SO libspdk_nvme.so.15.0 00:03:34.950 CC lib/event/scheduler_static.o 00:03:35.207 SO libspdk_virtio.so.7.0 00:03:35.207 SYMLINK libspdk_virtio.so 00:03:35.207 LIB libspdk_accel.a 00:03:35.207 SO libspdk_accel.so.16.0 00:03:35.207 SYMLINK libspdk_nvme.so 00:03:35.207 SYMLINK libspdk_accel.so 00:03:35.464 LIB libspdk_event.a 00:03:35.464 SO libspdk_event.so.14.0 00:03:35.464 CC lib/bdev/bdev_rpc.o 00:03:35.464 CC lib/bdev/bdev.o 00:03:35.464 CC lib/bdev/bdev_zone.o 00:03:35.464 CC lib/bdev/part.o 00:03:35.464 CC lib/bdev/scsi_nvme.o 00:03:35.464 SYMLINK libspdk_event.so 00:03:35.464 LIB libspdk_fuse_dispatcher.a 00:03:35.464 SO libspdk_fuse_dispatcher.so.1.0 00:03:35.722 SYMLINK libspdk_fuse_dispatcher.so 00:03:37.094 LIB libspdk_blob.a 00:03:37.094 SO libspdk_blob.so.11.0 00:03:37.094 SYMLINK libspdk_blob.so 00:03:37.094 CC lib/blobfs/blobfs.o 00:03:37.094 CC lib/blobfs/tree.o 00:03:37.094 CC lib/lvol/lvol.o 00:03:37.742 LIB libspdk_bdev.a 00:03:38.000 SO libspdk_bdev.so.17.0 00:03:38.000 SYMLINK libspdk_bdev.so 00:03:38.000 LIB libspdk_blobfs.a 00:03:38.000 SO libspdk_blobfs.so.10.0 00:03:38.000 LIB libspdk_lvol.a 00:03:38.257 CC lib/ftl/ftl_core.o 00:03:38.257 CC lib/nbd/nbd.o 00:03:38.257 CC lib/ftl/ftl_init.o 00:03:38.257 CC lib/nbd/nbd_rpc.o 00:03:38.257 CC lib/ftl/ftl_layout.o 00:03:38.257 CC lib/nvmf/ctrlr.o 00:03:38.257 CC lib/scsi/dev.o 00:03:38.257 CC lib/ublk/ublk.o 00:03:38.257 SO libspdk_lvol.so.10.0 00:03:38.257 SYMLINK libspdk_blobfs.so 00:03:38.257 CC lib/ublk/ublk_rpc.o 00:03:38.257 SYMLINK libspdk_lvol.so 00:03:38.257 CC lib/scsi/lun.o 00:03:38.257 CC lib/ftl/ftl_debug.o 00:03:38.257 CC lib/ftl/ftl_io.o 00:03:38.257 CC lib/ftl/ftl_sb.o 
00:03:38.257 CC lib/scsi/port.o 00:03:38.515 LIB libspdk_nbd.a 00:03:38.515 CC lib/scsi/scsi.o 00:03:38.515 CC lib/scsi/scsi_bdev.o 00:03:38.515 CC lib/scsi/scsi_pr.o 00:03:38.515 CC lib/ftl/ftl_l2p.o 00:03:38.515 SO libspdk_nbd.so.7.0 00:03:38.515 CC lib/scsi/scsi_rpc.o 00:03:38.515 CC lib/nvmf/ctrlr_discovery.o 00:03:38.515 CC lib/nvmf/ctrlr_bdev.o 00:03:38.515 SYMLINK libspdk_nbd.so 00:03:38.515 CC lib/nvmf/subsystem.o 00:03:38.515 CC lib/scsi/task.o 00:03:38.515 CC lib/nvmf/nvmf.o 00:03:38.515 LIB libspdk_ublk.a 00:03:38.773 CC lib/ftl/ftl_l2p_flat.o 00:03:38.773 SO libspdk_ublk.so.3.0 00:03:38.773 SYMLINK libspdk_ublk.so 00:03:38.773 CC lib/nvmf/nvmf_rpc.o 00:03:38.773 CC lib/nvmf/transport.o 00:03:38.773 CC lib/ftl/ftl_nv_cache.o 00:03:38.773 CC lib/ftl/ftl_band.o 00:03:39.032 LIB libspdk_scsi.a 00:03:39.032 CC lib/nvmf/tcp.o 00:03:39.032 SO libspdk_scsi.so.9.0 00:03:39.032 SYMLINK libspdk_scsi.so 00:03:39.289 CC lib/ftl/ftl_band_ops.o 00:03:39.289 CC lib/ftl/ftl_writer.o 00:03:39.289 CC lib/iscsi/conn.o 00:03:39.289 CC lib/iscsi/init_grp.o 00:03:39.289 CC lib/vhost/vhost.o 00:03:39.548 CC lib/vhost/vhost_rpc.o 00:03:39.548 CC lib/vhost/vhost_scsi.o 00:03:39.548 CC lib/nvmf/stubs.o 00:03:39.548 CC lib/ftl/ftl_rq.o 00:03:39.548 CC lib/ftl/ftl_reloc.o 00:03:39.548 CC lib/nvmf/mdns_server.o 00:03:39.548 CC lib/iscsi/iscsi.o 00:03:39.806 CC lib/ftl/ftl_l2p_cache.o 00:03:39.806 CC lib/iscsi/param.o 00:03:39.806 CC lib/iscsi/portal_grp.o 00:03:39.806 CC lib/iscsi/tgt_node.o 00:03:39.806 CC lib/iscsi/iscsi_subsystem.o 00:03:40.065 CC lib/ftl/ftl_p2l.o 00:03:40.065 CC lib/ftl/ftl_p2l_log.o 00:03:40.065 CC lib/ftl/mngt/ftl_mngt.o 00:03:40.065 CC lib/nvmf/rdma.o 00:03:40.065 CC lib/vhost/vhost_blk.o 00:03:40.323 CC lib/vhost/rte_vhost_user.o 00:03:40.323 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:40.323 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:40.323 CC lib/nvmf/auth.o 00:03:40.323 CC lib/iscsi/iscsi_rpc.o 00:03:40.323 CC lib/iscsi/task.o 00:03:40.323 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:40.581 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:40.581 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:40.581 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:40.581 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:40.839 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:40.839 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:40.839 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:40.839 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:40.839 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:40.839 CC lib/ftl/utils/ftl_conf.o 00:03:40.839 CC lib/ftl/utils/ftl_md.o 00:03:40.839 LIB libspdk_iscsi.a 00:03:40.839 CC lib/ftl/utils/ftl_mempool.o 00:03:40.839 CC lib/ftl/utils/ftl_bitmap.o 00:03:40.839 CC lib/ftl/utils/ftl_property.o 00:03:40.839 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:41.097 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:41.097 SO libspdk_iscsi.so.8.0 00:03:41.097 LIB libspdk_vhost.a 00:03:41.097 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:41.097 SO libspdk_vhost.so.8.0 00:03:41.097 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:41.097 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:41.097 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:41.097 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:41.097 SYMLINK libspdk_iscsi.so 00:03:41.097 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:41.097 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:41.097 SYMLINK libspdk_vhost.so 00:03:41.097 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:41.097 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:41.355 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:41.355 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:41.355 CC lib/ftl/base/ftl_base_dev.o 00:03:41.355 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:41.355 CC lib/ftl/ftl_trace.o 00:03:41.614 LIB libspdk_ftl.a 00:03:41.614 SO libspdk_ftl.so.9.0 00:03:41.871 SYMLINK libspdk_ftl.so 00:03:42.438 LIB libspdk_nvmf.a 00:03:42.438 SO libspdk_nvmf.so.20.0 00:03:42.438 SYMLINK libspdk_nvmf.so 00:03:43.004 CC module/env_dpdk/env_dpdk_rpc.o 00:03:43.004 CC module/fsdev/aio/fsdev_aio.o 00:03:43.004 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:43.004 CC module/blob/bdev/blob_bdev.o 00:03:43.004 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:43.004 CC module/keyring/file/keyring.o 00:03:43.004 CC module/accel/ioat/accel_ioat.o 00:03:43.004 CC module/accel/error/accel_error.o 00:03:43.004 CC module/scheduler/gscheduler/gscheduler.o 00:03:43.004 CC module/sock/posix/posix.o 00:03:43.004 LIB libspdk_env_dpdk_rpc.a 00:03:43.004 SO libspdk_env_dpdk_rpc.so.6.0 00:03:43.004 CC module/keyring/file/keyring_rpc.o 00:03:43.004 SYMLINK libspdk_env_dpdk_rpc.so 00:03:43.004 CC module/accel/error/accel_error_rpc.o 00:03:43.004 LIB libspdk_scheduler_dpdk_governor.a 00:03:43.004 LIB libspdk_scheduler_gscheduler.a 00:03:43.004 LIB libspdk_scheduler_dynamic.a 00:03:43.004 SO libspdk_scheduler_gscheduler.so.4.0 00:03:43.004 SO libspdk_scheduler_dynamic.so.4.0 00:03:43.004 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:43.004 CC module/accel/ioat/accel_ioat_rpc.o 00:03:43.004 LIB libspdk_blob_bdev.a 00:03:43.004 SYMLINK libspdk_scheduler_dynamic.so 00:03:43.004 LIB libspdk_keyring_file.a 00:03:43.004 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:43.004 SYMLINK libspdk_scheduler_gscheduler.so 00:03:43.004 CC module/fsdev/aio/linux_aio_mgr.o 00:03:43.004 SO libspdk_blob_bdev.so.11.0 00:03:43.004 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:43.004 SO libspdk_keyring_file.so.2.0 00:03:43.262 LIB libspdk_accel_error.a 00:03:43.262 SO libspdk_accel_error.so.2.0 00:03:43.262 SYMLINK libspdk_keyring_file.so 00:03:43.262 SYMLINK libspdk_blob_bdev.so 00:03:43.262 CC module/keyring/linux/keyring.o 00:03:43.262 LIB libspdk_accel_ioat.a 00:03:43.262 CC module/keyring/linux/keyring_rpc.o 00:03:43.262 SYMLINK libspdk_accel_error.so 00:03:43.262 SO libspdk_accel_ioat.so.6.0 00:03:43.262 CC module/accel/dsa/accel_dsa.o 00:03:43.262 SYMLINK libspdk_accel_ioat.so 00:03:43.262 CC module/accel/iaa/accel_iaa.o 00:03:43.262 LIB libspdk_keyring_linux.a 00:03:43.262 SO libspdk_keyring_linux.so.1.0 00:03:43.262 CC module/bdev/delay/vbdev_delay.o 00:03:43.520 SYMLINK libspdk_keyring_linux.so 00:03:43.520 CC module/bdev/error/vbdev_error.o 00:03:43.520 CC module/accel/iaa/accel_iaa_rpc.o 00:03:43.520 CC module/blobfs/bdev/blobfs_bdev.o 00:03:43.520 CC module/bdev/gpt/gpt.o 00:03:43.520 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:43.520 CC module/bdev/lvol/vbdev_lvol.o 00:03:43.520 CC module/accel/dsa/accel_dsa_rpc.o 00:03:43.520 LIB libspdk_accel_iaa.a 00:03:43.520 SO libspdk_accel_iaa.so.3.0 00:03:43.520 LIB libspdk_fsdev_aio.a 00:03:43.520 LIB libspdk_sock_posix.a 00:03:43.520 SO libspdk_fsdev_aio.so.1.0 00:03:43.520 SYMLINK libspdk_accel_iaa.so 00:03:43.520 CC module/bdev/error/vbdev_error_rpc.o 00:03:43.520 SO libspdk_sock_posix.so.6.0 00:03:43.520 LIB libspdk_blobfs_bdev.a 00:03:43.520 CC module/bdev/gpt/vbdev_gpt.o 00:03:43.520 SO libspdk_blobfs_bdev.so.6.0 00:03:43.778 SYMLINK libspdk_fsdev_aio.so 00:03:43.778 LIB libspdk_accel_dsa.a 00:03:43.778 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:43.778 SO libspdk_accel_dsa.so.5.0 00:03:43.778 SYMLINK libspdk_sock_posix.so 00:03:43.778 SYMLINK libspdk_blobfs_bdev.so 00:03:43.778 LIB 
libspdk_bdev_error.a 00:03:43.778 CC module/bdev/malloc/bdev_malloc.o 00:03:43.778 SYMLINK libspdk_accel_dsa.so 00:03:43.778 SO libspdk_bdev_error.so.6.0 00:03:43.778 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:43.778 CC module/bdev/null/bdev_null.o 00:03:43.778 SYMLINK libspdk_bdev_error.so 00:03:43.778 LIB libspdk_bdev_delay.a 00:03:43.778 CC module/bdev/passthru/vbdev_passthru.o 00:03:43.778 CC module/bdev/nvme/bdev_nvme.o 00:03:43.778 SO libspdk_bdev_delay.so.6.0 00:03:43.778 CC module/bdev/raid/bdev_raid.o 00:03:43.778 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:43.778 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:43.778 LIB libspdk_bdev_gpt.a 00:03:44.037 SO libspdk_bdev_gpt.so.6.0 00:03:44.037 SYMLINK libspdk_bdev_delay.so 00:03:44.037 CC module/bdev/split/vbdev_split.o 00:03:44.037 SYMLINK libspdk_bdev_gpt.so 00:03:44.037 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:44.037 LIB libspdk_bdev_malloc.a 00:03:44.037 CC module/bdev/null/bdev_null_rpc.o 00:03:44.037 SO libspdk_bdev_malloc.so.6.0 00:03:44.037 CC module/bdev/xnvme/bdev_xnvme.o 00:03:44.037 SYMLINK libspdk_bdev_malloc.so 00:03:44.037 CC module/bdev/split/vbdev_split_rpc.o 00:03:44.037 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:44.037 CC module/bdev/aio/bdev_aio.o 00:03:44.295 LIB libspdk_bdev_passthru.a 00:03:44.295 LIB libspdk_bdev_null.a 00:03:44.295 SO libspdk_bdev_passthru.so.6.0 00:03:44.295 SO libspdk_bdev_null.so.6.0 00:03:44.295 LIB libspdk_bdev_lvol.a 00:03:44.295 CC module/bdev/raid/bdev_raid_rpc.o 00:03:44.295 LIB libspdk_bdev_split.a 00:03:44.295 SYMLINK libspdk_bdev_passthru.so 00:03:44.295 CC module/bdev/raid/bdev_raid_sb.o 00:03:44.295 SO libspdk_bdev_lvol.so.6.0 00:03:44.295 SO libspdk_bdev_split.so.6.0 00:03:44.295 SYMLINK libspdk_bdev_null.so 00:03:44.295 LIB libspdk_bdev_xnvme.a 00:03:44.295 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:44.295 SO libspdk_bdev_xnvme.so.3.0 00:03:44.295 SYMLINK libspdk_bdev_lvol.so 00:03:44.295 SYMLINK libspdk_bdev_split.so 00:03:44.295 CC module/bdev/aio/bdev_aio_rpc.o 00:03:44.295 SYMLINK libspdk_bdev_xnvme.so 00:03:44.295 CC module/bdev/raid/raid0.o 00:03:44.295 CC module/bdev/ftl/bdev_ftl.o 00:03:44.553 LIB libspdk_bdev_zone_block.a 00:03:44.553 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:44.553 SO libspdk_bdev_zone_block.so.6.0 00:03:44.553 CC module/bdev/iscsi/bdev_iscsi.o 00:03:44.553 LIB libspdk_bdev_aio.a 00:03:44.553 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:44.553 SO libspdk_bdev_aio.so.6.0 00:03:44.553 SYMLINK libspdk_bdev_zone_block.so 00:03:44.553 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:44.553 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:44.553 SYMLINK libspdk_bdev_aio.so 00:03:44.553 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:44.553 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:44.810 CC module/bdev/nvme/nvme_rpc.o 00:03:44.810 CC module/bdev/raid/raid1.o 00:03:44.810 LIB libspdk_bdev_ftl.a 00:03:44.810 CC module/bdev/nvme/bdev_mdns_client.o 00:03:44.810 CC module/bdev/nvme/vbdev_opal.o 00:03:44.810 SO libspdk_bdev_ftl.so.6.0 00:03:44.810 SYMLINK libspdk_bdev_ftl.so 00:03:44.810 CC module/bdev/raid/concat.o 00:03:44.810 LIB libspdk_bdev_iscsi.a 00:03:44.810 SO libspdk_bdev_iscsi.so.6.0 00:03:44.810 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:44.810 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:44.810 SYMLINK libspdk_bdev_iscsi.so 00:03:45.068 LIB libspdk_bdev_virtio.a 00:03:45.068 SO libspdk_bdev_virtio.so.6.0 00:03:45.068 LIB libspdk_bdev_raid.a 00:03:45.068 SYMLINK libspdk_bdev_virtio.so 00:03:45.068 SO libspdk_bdev_raid.so.6.0 
00:03:45.325 SYMLINK libspdk_bdev_raid.so 00:03:46.259 LIB libspdk_bdev_nvme.a 00:03:46.259 SO libspdk_bdev_nvme.so.7.1 00:03:46.259 SYMLINK libspdk_bdev_nvme.so 00:03:46.826 CC module/event/subsystems/iobuf/iobuf.o 00:03:46.826 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:46.826 CC module/event/subsystems/fsdev/fsdev.o 00:03:46.826 CC module/event/subsystems/vmd/vmd.o 00:03:46.826 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:46.826 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:46.826 CC module/event/subsystems/scheduler/scheduler.o 00:03:46.826 CC module/event/subsystems/keyring/keyring.o 00:03:46.826 CC module/event/subsystems/sock/sock.o 00:03:46.826 LIB libspdk_event_scheduler.a 00:03:46.826 LIB libspdk_event_fsdev.a 00:03:46.826 LIB libspdk_event_keyring.a 00:03:46.826 LIB libspdk_event_vhost_blk.a 00:03:46.826 LIB libspdk_event_vmd.a 00:03:46.826 LIB libspdk_event_iobuf.a 00:03:46.826 LIB libspdk_event_sock.a 00:03:46.826 SO libspdk_event_scheduler.so.4.0 00:03:46.826 SO libspdk_event_fsdev.so.1.0 00:03:46.826 SO libspdk_event_keyring.so.1.0 00:03:46.826 SO libspdk_event_vmd.so.6.0 00:03:46.826 SO libspdk_event_vhost_blk.so.3.0 00:03:46.826 SO libspdk_event_sock.so.5.0 00:03:46.826 SO libspdk_event_iobuf.so.3.0 00:03:46.826 SYMLINK libspdk_event_scheduler.so 00:03:46.826 SYMLINK libspdk_event_fsdev.so 00:03:46.826 SYMLINK libspdk_event_sock.so 00:03:46.826 SYMLINK libspdk_event_keyring.so 00:03:46.826 SYMLINK libspdk_event_vhost_blk.so 00:03:46.826 SYMLINK libspdk_event_vmd.so 00:03:46.826 SYMLINK libspdk_event_iobuf.so 00:03:47.084 CC module/event/subsystems/accel/accel.o 00:03:47.341 LIB libspdk_event_accel.a 00:03:47.341 SO libspdk_event_accel.so.6.0 00:03:47.341 SYMLINK libspdk_event_accel.so 00:03:47.599 CC module/event/subsystems/bdev/bdev.o 00:03:47.599 LIB libspdk_event_bdev.a 00:03:47.857 SO libspdk_event_bdev.so.6.0 00:03:47.857 SYMLINK libspdk_event_bdev.so 00:03:47.857 CC module/event/subsystems/ublk/ublk.o 00:03:47.857 CC module/event/subsystems/scsi/scsi.o 00:03:47.857 CC module/event/subsystems/nbd/nbd.o 00:03:47.857 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:47.857 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:48.116 LIB libspdk_event_ublk.a 00:03:48.116 LIB libspdk_event_nbd.a 00:03:48.116 LIB libspdk_event_scsi.a 00:03:48.116 SO libspdk_event_ublk.so.3.0 00:03:48.116 SO libspdk_event_nbd.so.6.0 00:03:48.116 SO libspdk_event_scsi.so.6.0 00:03:48.116 SYMLINK libspdk_event_ublk.so 00:03:48.116 SYMLINK libspdk_event_nbd.so 00:03:48.116 SYMLINK libspdk_event_scsi.so 00:03:48.116 LIB libspdk_event_nvmf.a 00:03:48.116 SO libspdk_event_nvmf.so.6.0 00:03:48.116 SYMLINK libspdk_event_nvmf.so 00:03:48.373 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:48.373 CC module/event/subsystems/iscsi/iscsi.o 00:03:48.373 LIB libspdk_event_vhost_scsi.a 00:03:48.373 SO libspdk_event_vhost_scsi.so.3.0 00:03:48.373 LIB libspdk_event_iscsi.a 00:03:48.373 SYMLINK libspdk_event_vhost_scsi.so 00:03:48.373 SO libspdk_event_iscsi.so.6.0 00:03:48.632 SYMLINK libspdk_event_iscsi.so 00:03:48.632 SO libspdk.so.6.0 00:03:48.632 SYMLINK libspdk.so 00:03:48.891 TEST_HEADER include/spdk/accel.h 00:03:48.891 TEST_HEADER include/spdk/accel_module.h 00:03:48.891 TEST_HEADER include/spdk/assert.h 00:03:48.891 TEST_HEADER include/spdk/barrier.h 00:03:48.891 TEST_HEADER include/spdk/base64.h 00:03:48.891 CC test/rpc_client/rpc_client_test.o 00:03:48.891 TEST_HEADER include/spdk/bdev.h 00:03:48.891 TEST_HEADER include/spdk/bdev_module.h 00:03:48.891 TEST_HEADER 
include/spdk/bdev_zone.h 00:03:48.891 TEST_HEADER include/spdk/bit_array.h 00:03:48.891 TEST_HEADER include/spdk/bit_pool.h 00:03:48.891 TEST_HEADER include/spdk/blob_bdev.h 00:03:48.891 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:48.891 TEST_HEADER include/spdk/blobfs.h 00:03:48.891 CXX app/trace/trace.o 00:03:48.891 TEST_HEADER include/spdk/blob.h 00:03:48.891 TEST_HEADER include/spdk/conf.h 00:03:48.891 TEST_HEADER include/spdk/config.h 00:03:48.891 TEST_HEADER include/spdk/cpuset.h 00:03:48.891 TEST_HEADER include/spdk/crc16.h 00:03:48.891 TEST_HEADER include/spdk/crc32.h 00:03:48.891 TEST_HEADER include/spdk/crc64.h 00:03:48.891 TEST_HEADER include/spdk/dif.h 00:03:48.891 TEST_HEADER include/spdk/dma.h 00:03:48.891 TEST_HEADER include/spdk/endian.h 00:03:48.891 TEST_HEADER include/spdk/env_dpdk.h 00:03:48.891 TEST_HEADER include/spdk/env.h 00:03:48.891 TEST_HEADER include/spdk/event.h 00:03:48.891 TEST_HEADER include/spdk/fd_group.h 00:03:48.891 TEST_HEADER include/spdk/fd.h 00:03:48.891 TEST_HEADER include/spdk/file.h 00:03:48.891 TEST_HEADER include/spdk/fsdev.h 00:03:48.891 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:48.891 TEST_HEADER include/spdk/fsdev_module.h 00:03:48.891 TEST_HEADER include/spdk/ftl.h 00:03:48.891 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:48.891 TEST_HEADER include/spdk/gpt_spec.h 00:03:48.891 TEST_HEADER include/spdk/hexlify.h 00:03:48.891 TEST_HEADER include/spdk/histogram_data.h 00:03:48.891 TEST_HEADER include/spdk/idxd.h 00:03:48.891 CC examples/util/zipf/zipf.o 00:03:48.891 TEST_HEADER include/spdk/idxd_spec.h 00:03:48.891 TEST_HEADER include/spdk/init.h 00:03:48.891 TEST_HEADER include/spdk/ioat.h 00:03:48.891 TEST_HEADER include/spdk/ioat_spec.h 00:03:48.891 TEST_HEADER include/spdk/iscsi_spec.h 00:03:48.891 TEST_HEADER include/spdk/json.h 00:03:48.891 TEST_HEADER include/spdk/jsonrpc.h 00:03:48.891 CC examples/ioat/perf/perf.o 00:03:48.891 TEST_HEADER include/spdk/keyring.h 00:03:48.891 CC test/thread/poller_perf/poller_perf.o 00:03:48.891 TEST_HEADER include/spdk/keyring_module.h 00:03:48.891 TEST_HEADER include/spdk/likely.h 00:03:48.891 TEST_HEADER include/spdk/log.h 00:03:48.891 TEST_HEADER include/spdk/lvol.h 00:03:48.891 CC test/dma/test_dma/test_dma.o 00:03:48.891 TEST_HEADER include/spdk/md5.h 00:03:48.891 TEST_HEADER include/spdk/memory.h 00:03:48.891 TEST_HEADER include/spdk/mmio.h 00:03:48.891 TEST_HEADER include/spdk/nbd.h 00:03:48.891 TEST_HEADER include/spdk/net.h 00:03:48.891 TEST_HEADER include/spdk/notify.h 00:03:48.891 CC test/app/bdev_svc/bdev_svc.o 00:03:48.891 TEST_HEADER include/spdk/nvme.h 00:03:48.891 TEST_HEADER include/spdk/nvme_intel.h 00:03:48.891 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:48.891 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:48.891 TEST_HEADER include/spdk/nvme_spec.h 00:03:48.891 TEST_HEADER include/spdk/nvme_zns.h 00:03:48.891 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:48.891 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:48.891 TEST_HEADER include/spdk/nvmf.h 00:03:48.891 TEST_HEADER include/spdk/nvmf_spec.h 00:03:48.891 TEST_HEADER include/spdk/nvmf_transport.h 00:03:48.891 TEST_HEADER include/spdk/opal.h 00:03:48.891 TEST_HEADER include/spdk/opal_spec.h 00:03:48.891 TEST_HEADER include/spdk/pci_ids.h 00:03:48.891 TEST_HEADER include/spdk/pipe.h 00:03:48.891 TEST_HEADER include/spdk/queue.h 00:03:48.891 TEST_HEADER include/spdk/reduce.h 00:03:48.891 TEST_HEADER include/spdk/rpc.h 00:03:48.891 TEST_HEADER include/spdk/scheduler.h 00:03:48.891 TEST_HEADER include/spdk/scsi.h 00:03:48.891 
TEST_HEADER include/spdk/scsi_spec.h 00:03:48.891 TEST_HEADER include/spdk/sock.h 00:03:48.891 TEST_HEADER include/spdk/stdinc.h 00:03:48.891 TEST_HEADER include/spdk/string.h 00:03:48.891 TEST_HEADER include/spdk/thread.h 00:03:48.891 CC test/env/mem_callbacks/mem_callbacks.o 00:03:48.891 TEST_HEADER include/spdk/trace.h 00:03:48.891 TEST_HEADER include/spdk/trace_parser.h 00:03:48.891 TEST_HEADER include/spdk/tree.h 00:03:48.891 TEST_HEADER include/spdk/ublk.h 00:03:48.891 TEST_HEADER include/spdk/util.h 00:03:48.891 TEST_HEADER include/spdk/uuid.h 00:03:48.891 TEST_HEADER include/spdk/version.h 00:03:48.891 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:48.891 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:48.891 TEST_HEADER include/spdk/vhost.h 00:03:48.891 TEST_HEADER include/spdk/vmd.h 00:03:48.891 LINK zipf 00:03:48.891 TEST_HEADER include/spdk/xor.h 00:03:48.891 TEST_HEADER include/spdk/zipf.h 00:03:48.891 CXX test/cpp_headers/accel.o 00:03:48.891 LINK rpc_client_test 00:03:48.891 LINK poller_perf 00:03:49.149 LINK interrupt_tgt 00:03:49.149 LINK bdev_svc 00:03:49.149 LINK ioat_perf 00:03:49.149 CXX test/cpp_headers/accel_module.o 00:03:49.149 CC test/env/vtophys/vtophys.o 00:03:49.149 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:49.149 CC test/env/memory/memory_ut.o 00:03:49.149 CXX test/cpp_headers/assert.o 00:03:49.149 LINK spdk_trace 00:03:49.149 CC test/env/pci/pci_ut.o 00:03:49.149 CC examples/ioat/verify/verify.o 00:03:49.406 LINK vtophys 00:03:49.406 LINK env_dpdk_post_init 00:03:49.406 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:49.406 CXX test/cpp_headers/barrier.o 00:03:49.406 LINK test_dma 00:03:49.406 CC app/trace_record/trace_record.o 00:03:49.406 CXX test/cpp_headers/base64.o 00:03:49.406 LINK mem_callbacks 00:03:49.406 LINK verify 00:03:49.406 CC test/event/event_perf/event_perf.o 00:03:49.664 CXX test/cpp_headers/bdev.o 00:03:49.664 CC app/nvmf_tgt/nvmf_main.o 00:03:49.664 CC test/event/reactor/reactor.o 00:03:49.664 LINK spdk_trace_record 00:03:49.664 LINK nvme_fuzz 00:03:49.664 CC test/event/reactor_perf/reactor_perf.o 00:03:49.664 LINK event_perf 00:03:49.664 LINK reactor 00:03:49.664 LINK pci_ut 00:03:49.664 LINK nvmf_tgt 00:03:49.664 CXX test/cpp_headers/bdev_module.o 00:03:49.664 LINK reactor_perf 00:03:49.664 CC examples/thread/thread/thread_ex.o 00:03:49.664 CXX test/cpp_headers/bdev_zone.o 00:03:49.922 CC test/event/app_repeat/app_repeat.o 00:03:49.922 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:49.922 CC test/event/scheduler/scheduler.o 00:03:49.922 CC app/iscsi_tgt/iscsi_tgt.o 00:03:49.922 LINK app_repeat 00:03:49.922 CC app/spdk_tgt/spdk_tgt.o 00:03:49.922 CXX test/cpp_headers/bit_array.o 00:03:49.922 LINK thread 00:03:49.922 CC examples/vmd/lsvmd/lsvmd.o 00:03:49.922 CC examples/sock/hello_world/hello_sock.o 00:03:50.181 LINK iscsi_tgt 00:03:50.181 CXX test/cpp_headers/bit_pool.o 00:03:50.181 LINK scheduler 00:03:50.181 LINK spdk_tgt 00:03:50.181 CC examples/vmd/led/led.o 00:03:50.181 CXX test/cpp_headers/blob_bdev.o 00:03:50.181 LINK lsvmd 00:03:50.181 CXX test/cpp_headers/blobfs_bdev.o 00:03:50.181 LINK led 00:03:50.181 LINK hello_sock 00:03:50.181 LINK memory_ut 00:03:50.181 CC app/spdk_lspci/spdk_lspci.o 00:03:50.440 CXX test/cpp_headers/blobfs.o 00:03:50.440 CC app/spdk_nvme_perf/perf.o 00:03:50.440 CC examples/idxd/perf/perf.o 00:03:50.440 CC app/spdk_nvme_identify/identify.o 00:03:50.440 CC app/spdk_nvme_discover/discovery_aer.o 00:03:50.440 CXX test/cpp_headers/blob.o 00:03:50.440 LINK spdk_lspci 00:03:50.440 CC 
app/spdk_top/spdk_top.o 00:03:50.440 CXX test/cpp_headers/conf.o 00:03:50.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:50.440 CC app/vhost/vhost.o 00:03:50.440 LINK spdk_nvme_discover 00:03:50.698 CXX test/cpp_headers/config.o 00:03:50.698 CXX test/cpp_headers/cpuset.o 00:03:50.698 CC app/spdk_dd/spdk_dd.o 00:03:50.698 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:50.698 CXX test/cpp_headers/crc16.o 00:03:50.698 LINK vhost 00:03:50.698 LINK idxd_perf 00:03:50.698 CXX test/cpp_headers/crc32.o 00:03:50.956 CXX test/cpp_headers/crc64.o 00:03:50.956 CXX test/cpp_headers/dif.o 00:03:50.956 LINK spdk_dd 00:03:50.956 CXX test/cpp_headers/dma.o 00:03:50.956 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:51.213 LINK vhost_fuzz 00:03:51.213 CC app/fio/nvme/fio_plugin.o 00:03:51.213 CXX test/cpp_headers/endian.o 00:03:51.213 LINK spdk_nvme_identify 00:03:51.213 CC examples/accel/perf/accel_perf.o 00:03:51.213 LINK spdk_top 00:03:51.213 LINK spdk_nvme_perf 00:03:51.213 LINK hello_fsdev 00:03:51.213 CXX test/cpp_headers/env_dpdk.o 00:03:51.213 CXX test/cpp_headers/env.o 00:03:51.213 CC examples/blob/cli/blobcli.o 00:03:51.213 CXX test/cpp_headers/event.o 00:03:51.213 CC examples/blob/hello_world/hello_blob.o 00:03:51.213 CXX test/cpp_headers/fd_group.o 00:03:51.213 CXX test/cpp_headers/fd.o 00:03:51.471 CXX test/cpp_headers/file.o 00:03:51.471 CC app/fio/bdev/fio_plugin.o 00:03:51.471 LINK iscsi_fuzz 00:03:51.471 LINK hello_blob 00:03:51.471 CC test/accel/dif/dif.o 00:03:51.471 CXX test/cpp_headers/fsdev.o 00:03:51.729 CC test/blobfs/mkfs/mkfs.o 00:03:51.729 LINK accel_perf 00:03:51.729 LINK spdk_nvme 00:03:51.729 CC test/lvol/esnap/esnap.o 00:03:51.729 CXX test/cpp_headers/fsdev_module.o 00:03:51.729 CC test/app/histogram_perf/histogram_perf.o 00:03:51.729 CXX test/cpp_headers/ftl.o 00:03:51.729 LINK blobcli 00:03:51.729 LINK mkfs 00:03:51.729 CC examples/nvme/hello_world/hello_world.o 00:03:51.987 LINK histogram_perf 00:03:51.987 CC examples/nvme/reconnect/reconnect.o 00:03:51.987 CXX test/cpp_headers/fuse_dispatcher.o 00:03:51.987 LINK spdk_bdev 00:03:51.987 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.987 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:51.987 CC test/app/jsoncat/jsoncat.o 00:03:51.987 LINK hello_world 00:03:51.987 CXX test/cpp_headers/gpt_spec.o 00:03:51.987 CC test/nvme/aer/aer.o 00:03:52.244 CC test/nvme/reset/reset.o 00:03:52.244 LINK hello_bdev 00:03:52.244 LINK jsoncat 00:03:52.244 CXX test/cpp_headers/hexlify.o 00:03:52.244 CC examples/nvme/arbitration/arbitration.o 00:03:52.244 LINK dif 00:03:52.244 LINK reconnect 00:03:52.244 LINK aer 00:03:52.244 LINK reset 00:03:52.244 CC test/app/stub/stub.o 00:03:52.244 LINK nvme_manage 00:03:52.244 CXX test/cpp_headers/histogram_data.o 00:03:52.502 CC examples/bdev/bdevperf/bdevperf.o 00:03:52.502 CC examples/nvme/hotplug/hotplug.o 00:03:52.502 CC test/nvme/sgl/sgl.o 00:03:52.502 CC test/nvme/e2edp/nvme_dp.o 00:03:52.502 CXX test/cpp_headers/idxd.o 00:03:52.502 LINK stub 00:03:52.502 CC test/nvme/overhead/overhead.o 00:03:52.502 CC test/nvme/err_injection/err_injection.o 00:03:52.502 LINK arbitration 00:03:52.760 LINK hotplug 00:03:52.760 CXX test/cpp_headers/idxd_spec.o 00:03:52.760 LINK err_injection 00:03:52.760 LINK sgl 00:03:52.760 LINK overhead 00:03:52.760 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:52.760 LINK nvme_dp 00:03:52.760 CC test/bdev/bdevio/bdevio.o 00:03:52.760 CXX test/cpp_headers/init.o 00:03:52.760 CC examples/nvme/abort/abort.o 00:03:53.019 CC test/nvme/startup/startup.o 00:03:53.019 CC 
test/nvme/reserve/reserve.o 00:03:53.019 LINK cmb_copy 00:03:53.019 CC test/nvme/connect_stress/connect_stress.o 00:03:53.019 CC test/nvme/simple_copy/simple_copy.o 00:03:53.019 CXX test/cpp_headers/ioat.o 00:03:53.019 LINK bdevperf 00:03:53.019 LINK reserve 00:03:53.019 LINK startup 00:03:53.019 CXX test/cpp_headers/ioat_spec.o 00:03:53.019 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:53.019 LINK connect_stress 00:03:53.278 LINK simple_copy 00:03:53.278 LINK bdevio 00:03:53.278 LINK abort 00:03:53.278 CXX test/cpp_headers/iscsi_spec.o 00:03:53.278 CC test/nvme/boot_partition/boot_partition.o 00:03:53.278 CC test/nvme/compliance/nvme_compliance.o 00:03:53.278 LINK pmr_persistence 00:03:53.278 CC test/nvme/fused_ordering/fused_ordering.o 00:03:53.278 CXX test/cpp_headers/json.o 00:03:53.278 CXX test/cpp_headers/jsonrpc.o 00:03:53.278 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:53.278 CC test/nvme/fdp/fdp.o 00:03:53.278 LINK boot_partition 00:03:53.540 CC test/nvme/cuse/cuse.o 00:03:53.540 LINK fused_ordering 00:03:53.540 CXX test/cpp_headers/keyring.o 00:03:53.540 CXX test/cpp_headers/keyring_module.o 00:03:53.540 LINK doorbell_aers 00:03:53.540 CXX test/cpp_headers/likely.o 00:03:53.540 LINK nvme_compliance 00:03:53.540 CC examples/nvmf/nvmf/nvmf.o 00:03:53.540 CXX test/cpp_headers/log.o 00:03:53.540 CXX test/cpp_headers/lvol.o 00:03:53.540 CXX test/cpp_headers/md5.o 00:03:53.799 CXX test/cpp_headers/memory.o 00:03:53.799 CXX test/cpp_headers/mmio.o 00:03:53.799 LINK fdp 00:03:53.799 CXX test/cpp_headers/nbd.o 00:03:53.799 CXX test/cpp_headers/net.o 00:03:53.799 CXX test/cpp_headers/notify.o 00:03:53.799 CXX test/cpp_headers/nvme.o 00:03:53.799 CXX test/cpp_headers/nvme_intel.o 00:03:53.799 CXX test/cpp_headers/nvme_ocssd.o 00:03:53.799 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:53.799 CXX test/cpp_headers/nvme_spec.o 00:03:53.799 CXX test/cpp_headers/nvme_zns.o 00:03:53.799 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.799 LINK nvmf 00:03:53.799 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:54.056 CXX test/cpp_headers/nvmf.o 00:03:54.056 CXX test/cpp_headers/nvmf_spec.o 00:03:54.056 CXX test/cpp_headers/nvmf_transport.o 00:03:54.056 CXX test/cpp_headers/opal.o 00:03:54.056 CXX test/cpp_headers/opal_spec.o 00:03:54.056 CXX test/cpp_headers/pci_ids.o 00:03:54.056 CXX test/cpp_headers/pipe.o 00:03:54.056 CXX test/cpp_headers/queue.o 00:03:54.056 CXX test/cpp_headers/reduce.o 00:03:54.056 CXX test/cpp_headers/rpc.o 00:03:54.056 CXX test/cpp_headers/scheduler.o 00:03:54.056 CXX test/cpp_headers/scsi.o 00:03:54.056 CXX test/cpp_headers/scsi_spec.o 00:03:54.056 CXX test/cpp_headers/sock.o 00:03:54.056 CXX test/cpp_headers/stdinc.o 00:03:54.056 CXX test/cpp_headers/string.o 00:03:54.313 CXX test/cpp_headers/thread.o 00:03:54.313 CXX test/cpp_headers/trace.o 00:03:54.313 CXX test/cpp_headers/trace_parser.o 00:03:54.313 CXX test/cpp_headers/tree.o 00:03:54.313 CXX test/cpp_headers/ublk.o 00:03:54.313 CXX test/cpp_headers/util.o 00:03:54.313 CXX test/cpp_headers/uuid.o 00:03:54.313 CXX test/cpp_headers/version.o 00:03:54.313 CXX test/cpp_headers/vfio_user_pci.o 00:03:54.313 CXX test/cpp_headers/vfio_user_spec.o 00:03:54.313 CXX test/cpp_headers/vhost.o 00:03:54.313 CXX test/cpp_headers/vmd.o 00:03:54.313 CXX test/cpp_headers/xor.o 00:03:54.313 CXX test/cpp_headers/zipf.o 00:03:54.570 LINK cuse 00:03:56.473 LINK esnap 00:03:56.731 00:03:56.731 real 1m6.272s 00:03:56.731 user 6m8.253s 00:03:56.731 sys 1m6.095s 00:03:56.731 16:30:41 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 
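The real/user/sys figures just above are the timing summary for the make-driven test, and the traced commands around this point carry a "HH:MM:SS script@line -- $ command" prefix, which is the kind of output bash xtrace produces when PS4 is customized. Below is a minimal, illustrative sketch of producing similar trace prefixes in a standalone script; the PS4 format and the make invocation are assumptions for the example, not the harness's actual helpers.

    #!/usr/bin/env bash
    # Illustrative sketch only: timestamped xtrace prefixes resembling the log above.
    # The harness's real PS4 and its xtrace/timing helpers are not reproduced here.
    export PS4='$(date +%T) ${BASH_SOURCE[0]##*/}@${LINENO} -- $ '
    set -x                    # enable tracing; each command is echoed with the PS4 prefix
    time make -j"$(nproc)"    # the bash time keyword prints a real/user/sys summary
    set +x                    # disable tracing again once the traced block is done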
00:03:56.731 16:30:41 make -- common/autotest_common.sh@10 -- $ set +x 00:03:56.731 ************************************ 00:03:56.731 END TEST make 00:03:56.731 ************************************ 00:03:56.731 16:30:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:56.731 16:30:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:56.731 16:30:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:56.731 16:30:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.731 16:30:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:56.731 16:30:41 -- pm/common@44 -- $ pid=5062 00:03:56.731 16:30:41 -- pm/common@50 -- $ kill -TERM 5062 00:03:56.731 16:30:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.731 16:30:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:56.731 16:30:41 -- pm/common@44 -- $ pid=5063 00:03:56.731 16:30:41 -- pm/common@50 -- $ kill -TERM 5063 00:03:56.731 16:30:41 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:56.731 16:30:41 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:56.990 16:30:41 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.990 16:30:41 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.990 16:30:41 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.990 16:30:41 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.990 16:30:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.990 16:30:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.990 16:30:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.990 16:30:41 -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.990 16:30:41 -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.990 16:30:41 -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.990 16:30:41 -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.990 16:30:41 -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.990 16:30:41 -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.990 16:30:41 -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.990 16:30:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.990 16:30:41 -- scripts/common.sh@344 -- # case "$op" in 00:03:56.990 16:30:41 -- scripts/common.sh@345 -- # : 1 00:03:56.990 16:30:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.990 16:30:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.990 16:30:41 -- scripts/common.sh@365 -- # decimal 1 00:03:56.990 16:30:41 -- scripts/common.sh@353 -- # local d=1 00:03:56.990 16:30:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.990 16:30:41 -- scripts/common.sh@355 -- # echo 1 00:03:56.990 16:30:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.990 16:30:41 -- scripts/common.sh@366 -- # decimal 2 00:03:56.990 16:30:41 -- scripts/common.sh@353 -- # local d=2 00:03:56.990 16:30:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.990 16:30:41 -- scripts/common.sh@355 -- # echo 2 00:03:56.990 16:30:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.990 16:30:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.990 16:30:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.990 16:30:41 -- scripts/common.sh@368 -- # return 0 00:03:56.990 16:30:41 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.990 16:30:41 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.990 --rc genhtml_branch_coverage=1 00:03:56.990 --rc genhtml_function_coverage=1 00:03:56.990 --rc genhtml_legend=1 00:03:56.990 --rc geninfo_all_blocks=1 00:03:56.990 --rc geninfo_unexecuted_blocks=1 00:03:56.990 00:03:56.990 ' 00:03:56.990 16:30:41 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.990 --rc genhtml_branch_coverage=1 00:03:56.990 --rc genhtml_function_coverage=1 00:03:56.990 --rc genhtml_legend=1 00:03:56.990 --rc geninfo_all_blocks=1 00:03:56.990 --rc geninfo_unexecuted_blocks=1 00:03:56.990 00:03:56.990 ' 00:03:56.990 16:30:41 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.990 --rc genhtml_branch_coverage=1 00:03:56.990 --rc genhtml_function_coverage=1 00:03:56.990 --rc genhtml_legend=1 00:03:56.990 --rc geninfo_all_blocks=1 00:03:56.990 --rc geninfo_unexecuted_blocks=1 00:03:56.990 00:03:56.990 ' 00:03:56.990 16:30:41 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.990 --rc genhtml_branch_coverage=1 00:03:56.991 --rc genhtml_function_coverage=1 00:03:56.991 --rc genhtml_legend=1 00:03:56.991 --rc geninfo_all_blocks=1 00:03:56.991 --rc geninfo_unexecuted_blocks=1 00:03:56.991 00:03:56.991 ' 00:03:56.991 16:30:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:56.991 16:30:41 -- nvmf/common.sh@7 -- # uname -s 00:03:56.991 16:30:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.991 16:30:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.991 16:30:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.991 16:30:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.991 16:30:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.991 16:30:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.991 16:30:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:56.991 16:30:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.991 16:30:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.991 16:30:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.991 16:30:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14dbd995-d808-4651-988a-ff7c615cd4c8 00:03:56.991 
16:30:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=14dbd995-d808-4651-988a-ff7c615cd4c8 00:03:56.991 16:30:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.991 16:30:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.991 16:30:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:56.991 16:30:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.991 16:30:41 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:56.991 16:30:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:56.991 16:30:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.991 16:30:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.991 16:30:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.991 16:30:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.991 16:30:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.991 16:30:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.991 16:30:41 -- paths/export.sh@5 -- # export PATH 00:03:56.991 16:30:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.991 16:30:41 -- nvmf/common.sh@51 -- # : 0 00:03:56.991 16:30:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:56.991 16:30:41 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:56.991 16:30:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.991 16:30:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.991 16:30:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.991 16:30:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:56.991 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:56.991 16:30:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:56.991 16:30:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:56.991 16:30:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:56.991 16:30:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:56.991 16:30:41 -- spdk/autotest.sh@32 -- # uname -s 00:03:56.991 16:30:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:56.991 16:30:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:56.991 16:30:41 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:56.991 16:30:41 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:56.991 16:30:41 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:56.991 16:30:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:56.991 16:30:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:56.991 16:30:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:56.991 16:30:41 -- spdk/autotest.sh@48 -- # udevadm_pid=54219 00:03:56.991 16:30:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:56.991 16:30:41 -- pm/common@17 -- # local monitor 00:03:56.991 16:30:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.991 16:30:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:56.991 16:30:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.991 16:30:41 -- pm/common@25 -- # sleep 1 00:03:56.991 16:30:41 -- pm/common@21 -- # date +%s 00:03:56.991 16:30:41 -- pm/common@21 -- # date +%s 00:03:56.991 16:30:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732120241 00:03:56.991 16:30:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732120241 00:03:56.991 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732120241_collect-cpu-load.pm.log 00:03:56.991 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732120241_collect-vmstat.pm.log 00:03:57.938 16:30:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:57.938 16:30:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:57.938 16:30:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.938 16:30:42 -- common/autotest_common.sh@10 -- # set +x 00:03:57.938 16:30:42 -- spdk/autotest.sh@59 -- # create_test_list 00:03:57.938 16:30:42 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:57.938 16:30:42 -- common/autotest_common.sh@10 -- # set +x 00:03:58.200 16:30:42 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:58.200 16:30:42 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:58.200 16:30:42 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:58.200 16:30:42 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:58.200 16:30:42 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:58.200 16:30:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:58.200 16:30:42 -- common/autotest_common.sh@1457 -- # uname 00:03:58.200 16:30:42 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:58.200 16:30:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:58.200 16:30:42 -- common/autotest_common.sh@1477 -- # uname 00:03:58.200 16:30:42 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:58.200 16:30:42 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:58.200 16:30:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:58.201 lcov: LCOV version 1.15 00:03:58.201 16:30:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:13.098 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:13.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:28.074 16:31:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:28.074 16:31:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.074 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:04:28.074 16:31:12 -- spdk/autotest.sh@78 -- # rm -f 00:04:28.074 16:31:12 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.589 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:28.847 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:28.847 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:28.847 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:28.847 16:31:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:28.847 16:31:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:28.847 16:31:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:28.847 16:31:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:28.847 16:31:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.847 16:31:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:28.847 16:31:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:28.847 16:31:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.847 16:31:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.847 16:31:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.847 16:31:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:28.847 16:31:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:28.847 16:31:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.848 16:31:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:28.848 16:31:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:28.848 16:31:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.848 16:31:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:28.848 16:31:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:28.848 16:31:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.848 16:31:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:28.848 16:31:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:28.848 16:31:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:28.848 16:31:13 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.848 16:31:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:28.848 16:31:13 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:28.848 16:31:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.848 16:31:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:28.848 16:31:13 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:28.848 16:31:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:28.848 16:31:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.848 16:31:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:28.848 16:31:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.848 16:31:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.848 16:31:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:28.848 16:31:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:28.848 16:31:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:28.848 No valid GPT data, bailing 00:04:28.848 16:31:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.848 16:31:13 -- scripts/common.sh@394 -- # pt= 00:04:28.848 16:31:13 -- scripts/common.sh@395 -- # return 1 00:04:28.848 16:31:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:28.848 1+0 records in 00:04:28.848 1+0 records out 00:04:28.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103045 s, 102 MB/s 00:04:28.848 16:31:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.848 16:31:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.848 16:31:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:28.848 16:31:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:28.848 16:31:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:28.848 No valid GPT data, bailing 00:04:28.848 16:31:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:28.848 16:31:13 -- scripts/common.sh@394 -- # pt= 00:04:28.848 16:31:13 -- scripts/common.sh@395 -- # return 1 00:04:28.848 16:31:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:28.848 1+0 records in 00:04:28.848 1+0 records out 00:04:28.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00304244 s, 345 MB/s 00:04:28.848 16:31:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.848 16:31:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.848 16:31:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:28.848 16:31:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:28.848 16:31:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:28.848 No valid GPT data, bailing 00:04:28.848 16:31:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:28.848 16:31:13 -- scripts/common.sh@394 -- # pt= 00:04:28.848 16:31:13 -- scripts/common.sh@395 -- # return 1 00:04:28.848 16:31:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:28.848 1+0 
records in 00:04:28.848 1+0 records out 00:04:28.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509083 s, 206 MB/s 00:04:28.848 16:31:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.848 16:31:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.848 16:31:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:28.848 16:31:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:28.848 16:31:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:29.106 No valid GPT data, bailing 00:04:29.106 16:31:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:29.106 16:31:13 -- scripts/common.sh@394 -- # pt= 00:04:29.106 16:31:13 -- scripts/common.sh@395 -- # return 1 00:04:29.106 16:31:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:29.106 1+0 records in 00:04:29.106 1+0 records out 00:04:29.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045499 s, 230 MB/s 00:04:29.106 16:31:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.106 16:31:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.106 16:31:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:29.106 16:31:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:29.106 16:31:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:29.106 No valid GPT data, bailing 00:04:29.106 16:31:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:29.106 16:31:13 -- scripts/common.sh@394 -- # pt= 00:04:29.106 16:31:13 -- scripts/common.sh@395 -- # return 1 00:04:29.106 16:31:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:29.106 1+0 records in 00:04:29.106 1+0 records out 00:04:29.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00344408 s, 304 MB/s 00:04:29.106 16:31:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.106 16:31:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.106 16:31:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:29.106 16:31:13 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:29.106 16:31:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:29.106 No valid GPT data, bailing 00:04:29.106 16:31:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:29.106 16:31:13 -- scripts/common.sh@394 -- # pt= 00:04:29.106 16:31:13 -- scripts/common.sh@395 -- # return 1 00:04:29.106 16:31:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:29.106 1+0 records in 00:04:29.106 1+0 records out 00:04:29.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429033 s, 244 MB/s 00:04:29.106 16:31:13 -- spdk/autotest.sh@105 -- # sync 00:04:29.106 16:31:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:29.106 16:31:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:29.106 16:31:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:31.050 16:31:15 -- spdk/autotest.sh@111 -- # uname -s 00:04:31.050 16:31:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:31.050 16:31:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:31.050 16:31:15 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:31.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.614 
Hugepages 00:04:31.614 node hugesize free / total 00:04:31.614 node0 1048576kB 0 / 0 00:04:31.614 node0 2048kB 0 / 0 00:04:31.614 00:04:31.614 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.614 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:31.614 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:31.614 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:31.614 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:31.871 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:31.871 16:31:16 -- spdk/autotest.sh@117 -- # uname -s 00:04:31.871 16:31:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:31.871 16:31:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:31.871 16:31:16 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.694 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.694 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.694 16:31:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:34.066 16:31:18 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:34.066 16:31:18 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:34.066 16:31:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.066 16:31:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:34.066 16:31:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:34.066 16:31:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:34.066 16:31:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.066 16:31:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:34.066 16:31:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:34.066 16:31:18 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:34.066 16:31:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:34.066 16:31:18 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:34.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.324 Waiting for block devices as requested 00:04:34.324 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.324 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.324 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.582 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.859 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:39.859 16:31:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:39.859 16:31:24 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:39.859 16:31:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:39.859 16:31:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:39.859 16:31:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:39.859 16:31:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1543 -- # continue 00:04:39.859 16:31:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:39.859 16:31:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:39.859 16:31:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:39.859 16:31:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1543 -- # continue 00:04:39.859 16:31:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:39.859 16:31:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:39.859 16:31:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.859 16:31:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:39.859 16:31:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1543 -- # continue 00:04:39.859 16:31:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:39.859 16:31:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:39.859 16:31:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:39.860 16:31:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:39.860 16:31:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:39.860 16:31:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:39.860 16:31:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:39.860 16:31:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:39.860 16:31:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.860 16:31:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:39.860 16:31:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:39.860 16:31:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
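The per-controller checks traced here pull the oacs field out of nvme id-ctrl, test the Namespace Management bit (0x8, which is why oacs=' 0x12a' yields oacs_ns_manage=8), and then confirm unvmcap is 0. A hedged standalone sketch of the same idea, assuming nvme-cli is installed and using /dev/nvme1 only as an example device path:

    #!/usr/bin/env bash
    # Sketch: check whether a controller supports Namespace Management and has no
    # unallocated capacity, mirroring the oacs/unvmcap parsing traced above.
    ctrlr=/dev/nvme1                                   # example device path
    oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {gsub(/ /, "", $2); print $2}')
    if (( oacs & 0x8 )); then                          # OACS bit 3: Namespace Management/Attachment
        unvmcap=$(nvme id-ctrl "$ctrlr" | awk -F: '/^unvmcap/ {gsub(/ /, "", $2); print $2}')
        echo "$ctrlr: namespace management supported, unvmcap=$unvmcap"
    else
        echo "$ctrlr: namespace management not supported"
    fi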
00:04:39.860 16:31:24 -- common/autotest_common.sh@1543 -- # continue 00:04:39.860 16:31:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:39.860 16:31:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.860 16:31:24 -- common/autotest_common.sh@10 -- # set +x 00:04:39.860 16:31:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:39.860 16:31:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.860 16:31:24 -- common/autotest_common.sh@10 -- # set +x 00:04:39.860 16:31:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.685 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.685 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.685 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.685 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.685 16:31:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:40.685 16:31:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.685 16:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:40.685 16:31:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:40.685 16:31:25 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:40.685 16:31:25 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:40.685 16:31:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:40.685 16:31:25 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:40.685 16:31:25 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:40.685 16:31:25 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:40.685 16:31:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:40.685 16:31:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:40.685 16:31:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:40.685 16:31:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.685 16:31:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:40.685 16:31:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:40.685 16:31:25 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:40.685 16:31:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:40.685 16:31:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:40.685 16:31:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.685 16:31:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:40.685 16:31:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.685 16:31:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:40.685 16:31:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
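The opal_revert_cleanup loop traced immediately above (and finishing just below) reads each NVMe controller's PCI device ID from sysfs and keeps only those matching 0x0a54; on this VM every controller reports 0x0010, so the list stays empty. A minimal sketch of that filter on its own, with the BDF list written out as an example (the harness derives it from its gen_nvme.sh helper, which is not reproduced here):

    #!/usr/bin/env bash
    # Sketch: keep only NVMe controllers whose PCI device ID matches a target value.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)   # example list from this run
    target=0x0a54
    for bdf in "${bdfs[@]}"; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")         # e.g. 0x0010 on these QEMU devices
        if [[ "$dev_id" == "$target" ]]; then
            echo "$bdf matches $target"
        fi
    done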
00:04:40.685 16:31:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:40.685 16:31:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:40.685 16:31:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.685 16:31:25 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:40.685 16:31:25 -- common/autotest_common.sh@1572 -- # return 0 00:04:40.685 16:31:25 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:40.685 16:31:25 -- common/autotest_common.sh@1580 -- # return 0 00:04:40.685 16:31:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:40.685 16:31:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:40.685 16:31:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:40.685 16:31:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:40.685 16:31:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:40.685 16:31:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.685 16:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:40.685 16:31:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:40.685 16:31:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:40.685 16:31:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.685 16:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.685 16:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:40.685 ************************************ 00:04:40.685 START TEST env 00:04:40.685 ************************************ 00:04:40.685 16:31:25 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:40.685 * Looking for test storage... 00:04:40.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.944 16:31:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.944 16:31:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.944 16:31:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.944 16:31:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.944 16:31:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.944 16:31:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.944 16:31:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.944 16:31:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.944 16:31:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.944 16:31:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.944 16:31:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.944 16:31:25 env -- scripts/common.sh@344 -- # case "$op" in 00:04:40.944 16:31:25 env -- scripts/common.sh@345 -- # : 1 00:04:40.944 16:31:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.944 16:31:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.944 16:31:25 env -- scripts/common.sh@365 -- # decimal 1 00:04:40.944 16:31:25 env -- scripts/common.sh@353 -- # local d=1 00:04:40.944 16:31:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.944 16:31:25 env -- scripts/common.sh@355 -- # echo 1 00:04:40.944 16:31:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.944 16:31:25 env -- scripts/common.sh@366 -- # decimal 2 00:04:40.944 16:31:25 env -- scripts/common.sh@353 -- # local d=2 00:04:40.944 16:31:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.944 16:31:25 env -- scripts/common.sh@355 -- # echo 2 00:04:40.944 16:31:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.944 16:31:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.944 16:31:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.944 16:31:25 env -- scripts/common.sh@368 -- # return 0 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 16:31:25 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.944 --rc genhtml_branch_coverage=1 00:04:40.944 --rc genhtml_function_coverage=1 00:04:40.944 --rc genhtml_legend=1 00:04:40.944 --rc geninfo_all_blocks=1 00:04:40.944 --rc geninfo_unexecuted_blocks=1 00:04:40.944 00:04:40.944 ' 00:04:40.944 16:31:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:40.945 16:31:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.945 16:31:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.945 16:31:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.945 ************************************ 00:04:40.945 START TEST env_memory 00:04:40.945 ************************************ 00:04:40.945 16:31:25 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:40.945 00:04:40.945 00:04:40.945 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.945 http://cunit.sourceforge.net/ 00:04:40.945 00:04:40.945 00:04:40.945 Suite: memory 00:04:40.945 Test: alloc and free memory map ...[2024-11-20 16:31:25.701823] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:40.945 passed 00:04:40.945 Test: mem map translation ...[2024-11-20 16:31:25.740570] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:40.945 [2024-11-20 16:31:25.740616] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:40.945 [2024-11-20 16:31:25.740674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:40.945 [2024-11-20 16:31:25.740688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:40.945 passed 00:04:40.945 Test: mem map registration ...[2024-11-20 16:31:25.808638] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:40.945 [2024-11-20 16:31:25.808675] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:41.203 passed 00:04:41.203 Test: mem map adjacent registrations ...passed 00:04:41.203 00:04:41.203 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.203 suites 1 1 n/a 0 0 00:04:41.203 tests 4 4 4 0 0 00:04:41.203 asserts 152 152 152 0 n/a 00:04:41.203 00:04:41.203 Elapsed time = 0.232 seconds 00:04:41.203 00:04:41.203 real 0m0.266s 00:04:41.203 user 0m0.242s 00:04:41.203 sys 0m0.016s 00:04:41.203 16:31:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.203 16:31:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:41.203 ************************************ 00:04:41.203 END TEST env_memory 00:04:41.203 ************************************ 00:04:41.203 16:31:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:41.203 16:31:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.203 16:31:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.203 16:31:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.203 ************************************ 00:04:41.203 START TEST env_vtophys 00:04:41.203 ************************************ 00:04:41.203 16:31:25 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:41.203 EAL: lib.eal log level changed from notice to debug 00:04:41.203 EAL: Detected lcore 0 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 1 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 2 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 3 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 4 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 5 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 6 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 7 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 8 as core 0 on socket 0 00:04:41.203 EAL: Detected lcore 9 as core 0 on socket 0 00:04:41.203 EAL: Maximum logical cores by configuration: 128 00:04:41.203 EAL: Detected CPU lcores: 10 00:04:41.203 EAL: Detected NUMA nodes: 1 00:04:41.203 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:41.203 EAL: Detected shared linkage of DPDK 00:04:41.203 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:41.203 EAL: Selected IOVA mode 'PA' 00:04:41.203 EAL: Probing VFIO support... 00:04:41.203 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:41.203 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:41.203 EAL: Ask a virtual area of 0x2e000 bytes 00:04:41.203 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:41.203 EAL: Setting up physically contiguous memory... 00:04:41.203 EAL: Setting maximum number of open files to 524288 00:04:41.203 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:41.203 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:41.203 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.203 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:41.203 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.203 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.203 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:41.203 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:41.203 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.203 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:41.203 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.203 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.203 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:41.203 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:41.203 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.203 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:41.203 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.203 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.203 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:41.203 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:41.203 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.203 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:41.203 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.203 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.203 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:41.203 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:41.203 EAL: Hugepages will be freed exactly as allocated. 00:04:41.203 EAL: No shared files mode enabled, IPC is disabled 00:04:41.203 EAL: No shared files mode enabled, IPC is disabled 00:04:41.462 EAL: TSC frequency is ~2600000 KHz 00:04:41.462 EAL: Main lcore 0 is ready (tid=7fb4da92ba40;cpuset=[0]) 00:04:41.462 EAL: Trying to obtain current memory policy. 00:04:41.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.462 EAL: Restoring previous memory policy: 0 00:04:41.462 EAL: request: mp_malloc_sync 00:04:41.462 EAL: No shared files mode enabled, IPC is disabled 00:04:41.462 EAL: Heap on socket 0 was expanded by 2MB 00:04:41.462 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:41.462 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:41.462 EAL: Mem event callback 'spdk:(nil)' registered 00:04:41.462 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:41.462 00:04:41.462 00:04:41.462 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.462 http://cunit.sourceforge.net/ 00:04:41.462 00:04:41.462 00:04:41.462 Suite: components_suite 00:04:41.721 Test: vtophys_malloc_test ...passed 00:04:41.721 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:41.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.721 EAL: Restoring previous memory policy: 4 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was expanded by 4MB 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was shrunk by 4MB 00:04:41.721 EAL: Trying to obtain current memory policy. 00:04:41.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.721 EAL: Restoring previous memory policy: 4 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was expanded by 6MB 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was shrunk by 6MB 00:04:41.721 EAL: Trying to obtain current memory policy. 00:04:41.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.721 EAL: Restoring previous memory policy: 4 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was expanded by 10MB 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was shrunk by 10MB 00:04:41.721 EAL: Trying to obtain current memory policy. 00:04:41.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.721 EAL: Restoring previous memory policy: 4 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was expanded by 18MB 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was shrunk by 18MB 00:04:41.721 EAL: Trying to obtain current memory policy. 00:04:41.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.721 EAL: Restoring previous memory policy: 4 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was expanded by 34MB 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was shrunk by 34MB 00:04:41.721 EAL: Trying to obtain current memory policy. 
00:04:41.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.721 EAL: Restoring previous memory policy: 4 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was expanded by 66MB 00:04:41.979 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.979 EAL: request: mp_malloc_sync 00:04:41.979 EAL: No shared files mode enabled, IPC is disabled 00:04:41.979 EAL: Heap on socket 0 was shrunk by 66MB 00:04:41.979 EAL: Trying to obtain current memory policy. 00:04:41.979 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.979 EAL: Restoring previous memory policy: 4 00:04:41.979 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.979 EAL: request: mp_malloc_sync 00:04:41.979 EAL: No shared files mode enabled, IPC is disabled 00:04:41.979 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.238 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.238 EAL: request: mp_malloc_sync 00:04:42.238 EAL: No shared files mode enabled, IPC is disabled 00:04:42.238 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.238 EAL: Trying to obtain current memory policy. 00:04:42.238 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.238 EAL: Restoring previous memory policy: 4 00:04:42.238 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.238 EAL: request: mp_malloc_sync 00:04:42.238 EAL: No shared files mode enabled, IPC is disabled 00:04:42.238 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.496 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.496 EAL: request: mp_malloc_sync 00:04:42.496 EAL: No shared files mode enabled, IPC is disabled 00:04:42.496 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.754 EAL: Trying to obtain current memory policy. 00:04:42.754 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.013 EAL: Restoring previous memory policy: 4 00:04:43.013 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.013 EAL: request: mp_malloc_sync 00:04:43.013 EAL: No shared files mode enabled, IPC is disabled 00:04:43.013 EAL: Heap on socket 0 was expanded by 514MB 00:04:43.580 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.580 EAL: request: mp_malloc_sync 00:04:43.580 EAL: No shared files mode enabled, IPC is disabled 00:04:43.580 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.145 EAL: Trying to obtain current memory policy. 
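The alternating "expanded by"/"shrunk by" pairs in this suite are the vtophys_malloc_test loop at work: each pass allocates one progressively larger buffer (4MB, 6MB, 10MB, ... up to 1026MB) from the DPDK heap, which fires the 'spdk:(nil)' mem event callback registered earlier, then frees it again before the next size. A minimal way to reproduce the run outside the CI harness is sketched below; the paths match this workspace layout, and the HUGEMEM budget is an assumption (any value covering the largest 1026MB pass works).

sudo HUGEMEM=2048 ./scripts/setup.sh     # assumed hugepage budget in MB, large enough for the 1026MB pass
./test/env/vtophys/vtophys               # same binary env.sh@11 runs above; prints the CUnit summary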
00:04:44.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.145 EAL: Restoring previous memory policy: 4 00:04:44.145 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.145 EAL: request: mp_malloc_sync 00:04:44.145 EAL: No shared files mode enabled, IPC is disabled 00:04:44.145 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.519 EAL: request: mp_malloc_sync 00:04:45.519 EAL: No shared files mode enabled, IPC is disabled 00:04:45.519 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:46.453 passed 00:04:46.453 00:04:46.453 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.453 suites 1 1 n/a 0 0 00:04:46.453 tests 2 2 2 0 0 00:04:46.453 asserts 5705 5705 5705 0 n/a 00:04:46.453 00:04:46.453 Elapsed time = 4.925 seconds 00:04:46.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.453 EAL: request: mp_malloc_sync 00:04:46.453 EAL: No shared files mode enabled, IPC is disabled 00:04:46.453 EAL: Heap on socket 0 was shrunk by 2MB 00:04:46.453 EAL: No shared files mode enabled, IPC is disabled 00:04:46.453 EAL: No shared files mode enabled, IPC is disabled 00:04:46.453 EAL: No shared files mode enabled, IPC is disabled 00:04:46.453 00:04:46.453 real 0m5.183s 00:04:46.453 user 0m4.415s 00:04:46.453 sys 0m0.621s 00:04:46.453 16:31:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.453 16:31:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:46.453 ************************************ 00:04:46.453 END TEST env_vtophys 00:04:46.453 ************************************ 00:04:46.453 16:31:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.453 16:31:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.453 16:31:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.453 16:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.453 ************************************ 00:04:46.453 START TEST env_pci 00:04:46.453 ************************************ 00:04:46.453 16:31:31 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.453 00:04:46.453 00:04:46.453 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.453 http://cunit.sourceforge.net/ 00:04:46.453 00:04:46.453 00:04:46.453 Suite: pci 00:04:46.453 Test: pci_hook ...[2024-11-20 16:31:31.196790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57000 has claimed it 00:04:46.453 passed 00:04:46.453 00:04:46.453 EAL: Cannot find device (10000:00:01.0) 00:04:46.453 EAL: Failed to attach device on primary process 00:04:46.453 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.453 suites 1 1 n/a 0 0 00:04:46.453 tests 1 1 1 0 0 00:04:46.453 asserts 25 25 25 0 n/a 00:04:46.453 00:04:46.453 Elapsed time = 0.005 seconds 00:04:46.453 00:04:46.453 real 0m0.062s 00:04:46.453 user 0m0.032s 00:04:46.453 sys 0m0.030s 00:04:46.453 16:31:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.453 16:31:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:46.453 ************************************ 00:04:46.453 END TEST env_pci 00:04:46.453 ************************************ 00:04:46.453 16:31:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:46.453 16:31:31 env -- env/env.sh@15 -- # uname 00:04:46.453 16:31:31 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:46.453 16:31:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:46.453 16:31:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.453 16:31:31 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:46.453 16:31:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.453 16:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.453 ************************************ 00:04:46.453 START TEST env_dpdk_post_init 00:04:46.453 ************************************ 00:04:46.453 16:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.453 EAL: Detected CPU lcores: 10 00:04:46.453 EAL: Detected NUMA nodes: 1 00:04:46.453 EAL: Detected shared linkage of DPDK 00:04:46.453 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.453 EAL: Selected IOVA mode 'PA' 00:04:46.712 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:46.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:46.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:46.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:46.712 Starting DPDK initialization... 00:04:46.712 Starting SPDK post initialization... 00:04:46.712 SPDK NVMe probe 00:04:46.712 Attaching to 0000:00:10.0 00:04:46.712 Attaching to 0000:00:11.0 00:04:46.712 Attaching to 0000:00:12.0 00:04:46.712 Attaching to 0000:00:13.0 00:04:46.712 Attached to 0000:00:10.0 00:04:46.712 Attached to 0000:00:11.0 00:04:46.712 Attached to 0000:00:13.0 00:04:46.712 Attached to 0000:00:12.0 00:04:46.712 Cleaning up... 
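The four "Probe PCI driver: spdk_nvme (1b36:0010)" lines are env_dpdk_post_init attaching to the emulated NVMe controllers at 0000:00:10.0 through 0000:00:13.0 (1b36 is the Red Hat/QEMU vendor ID); 13.0 attaching before 12.0 only reflects asynchronous probe completion, not an error. The probe can be rerun by hand with the same arguments, a sketch that assumes the controllers were already unbound from the kernel nvme driver by scripts/setup.sh:

./scripts/setup.sh status        # assumption: each controller shows as bound to a userspace driver (uio/vfio)
./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000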
00:04:46.712 00:04:46.712 real 0m0.235s 00:04:46.712 user 0m0.084s 00:04:46.712 sys 0m0.053s 00:04:46.712 16:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.712 16:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.712 ************************************ 00:04:46.712 END TEST env_dpdk_post_init 00:04:46.712 ************************************ 00:04:46.712 16:31:31 env -- env/env.sh@26 -- # uname 00:04:46.712 16:31:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:46.712 16:31:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.712 16:31:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.712 16:31:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.712 16:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.712 ************************************ 00:04:46.712 START TEST env_mem_callbacks 00:04:46.712 ************************************ 00:04:46.712 16:31:31 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.712 EAL: Detected CPU lcores: 10 00:04:46.712 EAL: Detected NUMA nodes: 1 00:04:46.712 EAL: Detected shared linkage of DPDK 00:04:46.712 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.971 EAL: Selected IOVA mode 'PA' 00:04:46.971 00:04:46.971 00:04:46.971 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.971 http://cunit.sourceforge.net/ 00:04:46.971 00:04:46.971 00:04:46.971 Suite: memory 00:04:46.971 Test: test ... 00:04:46.971 register 0x200000200000 2097152 00:04:46.971 malloc 3145728 00:04:46.971 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.971 register 0x200000400000 4194304 00:04:46.971 buf 0x2000004fffc0 len 3145728 PASSED 00:04:46.971 malloc 64 00:04:46.971 buf 0x2000004ffec0 len 64 PASSED 00:04:46.971 malloc 4194304 00:04:46.971 register 0x200000800000 6291456 00:04:46.971 buf 0x2000009fffc0 len 4194304 PASSED 00:04:46.971 free 0x2000004fffc0 3145728 00:04:46.971 free 0x2000004ffec0 64 00:04:46.971 unregister 0x200000400000 4194304 PASSED 00:04:46.971 free 0x2000009fffc0 4194304 00:04:46.971 unregister 0x200000800000 6291456 PASSED 00:04:46.971 malloc 8388608 00:04:46.971 register 0x200000400000 10485760 00:04:46.971 buf 0x2000005fffc0 len 8388608 PASSED 00:04:46.971 free 0x2000005fffc0 8388608 00:04:46.971 unregister 0x200000400000 10485760 PASSED 00:04:46.971 passed 00:04:46.971 00:04:46.971 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.971 suites 1 1 n/a 0 0 00:04:46.971 tests 1 1 1 0 0 00:04:46.971 asserts 15 15 15 0 n/a 00:04:46.971 00:04:46.971 Elapsed time = 0.046 seconds 00:04:46.971 00:04:46.971 real 0m0.210s 00:04:46.971 user 0m0.065s 00:04:46.971 sys 0m0.042s 00:04:46.971 16:31:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.971 16:31:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:46.971 ************************************ 00:04:46.971 END TEST env_mem_callbacks 00:04:46.971 ************************************ 00:04:46.971 00:04:46.971 real 0m6.291s 00:04:46.971 user 0m4.989s 00:04:46.971 sys 0m0.953s 00:04:46.971 16:31:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.971 16:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.971 ************************************ 00:04:46.971 END TEST env 00:04:46.971 
************************************ 00:04:46.971 16:31:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:46.971 16:31:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.971 16:31:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.971 16:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:46.971 ************************************ 00:04:46.971 START TEST rpc 00:04:46.971 ************************************ 00:04:46.971 16:31:31 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:47.230 * Looking for test storage... 00:04:47.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.230 16:31:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.230 16:31:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.230 16:31:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.230 16:31:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.230 16:31:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.230 16:31:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.230 16:31:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.230 16:31:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.230 16:31:31 rpc -- scripts/common.sh@345 -- # : 1 00:04:47.230 16:31:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.230 16:31:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.230 16:31:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.230 16:31:31 rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.230 16:31:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.230 16:31:31 rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.230 16:31:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.230 16:31:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.230 16:31:31 rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.230 16:31:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.230 16:31:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.230 16:31:31 rpc -- scripts/common.sh@368 -- # return 0 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.230 --rc genhtml_branch_coverage=1 00:04:47.230 --rc genhtml_function_coverage=1 00:04:47.230 --rc genhtml_legend=1 00:04:47.230 --rc geninfo_all_blocks=1 00:04:47.230 --rc geninfo_unexecuted_blocks=1 00:04:47.230 00:04:47.230 ' 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.230 --rc genhtml_branch_coverage=1 00:04:47.230 --rc genhtml_function_coverage=1 00:04:47.230 --rc genhtml_legend=1 00:04:47.230 --rc geninfo_all_blocks=1 00:04:47.230 --rc geninfo_unexecuted_blocks=1 00:04:47.230 00:04:47.230 ' 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.230 --rc genhtml_branch_coverage=1 00:04:47.230 --rc genhtml_function_coverage=1 00:04:47.230 --rc genhtml_legend=1 00:04:47.230 --rc geninfo_all_blocks=1 00:04:47.230 --rc geninfo_unexecuted_blocks=1 00:04:47.230 00:04:47.230 ' 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.230 --rc genhtml_branch_coverage=1 00:04:47.230 --rc genhtml_function_coverage=1 00:04:47.230 --rc genhtml_legend=1 00:04:47.230 --rc geninfo_all_blocks=1 00:04:47.230 --rc geninfo_unexecuted_blocks=1 00:04:47.230 00:04:47.230 ' 00:04:47.230 16:31:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57126 00:04:47.230 16:31:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.230 16:31:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57126 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 57126 ']' 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
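waitforlisten 57126 blocks until the spdk_tgt process just launched answers on /var/tmp/spdk.sock; every rpc_cmd in the tests that follow then talks to that socket. Outside the harness the same wait can be approximated with a short poll loop; this is a sketch only, where the 1-second client timeout, 0.2-second retry interval, and rpc_get_methods probe are arbitrary choices rather than what autotest_common.sh literally does.

./build/bin/spdk_tgt -e bdev &                               # same invocation as rpc.sh@64 in the trace
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2                                                # retry until the UNIX socket answers
done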
00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.230 16:31:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.230 16:31:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:47.230 [2024-11-20 16:31:32.036410] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:47.230 [2024-11-20 16:31:32.036524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57126 ] 00:04:47.488 [2024-11-20 16:31:32.194139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.488 [2024-11-20 16:31:32.289812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:47.488 [2024-11-20 16:31:32.289860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57126' to capture a snapshot of events at runtime. 00:04:47.488 [2024-11-20 16:31:32.289871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:47.488 [2024-11-20 16:31:32.289880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:47.488 [2024-11-20 16:31:32.289887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57126 for offline analysis/debug. 00:04:47.488 [2024-11-20 16:31:32.290729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.054 16:31:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.054 16:31:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.054 16:31:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.054 16:31:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.054 16:31:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:48.054 16:31:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:48.054 16:31:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.054 16:31:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.054 16:31:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.054 ************************************ 00:04:48.054 START TEST rpc_integrity 00:04:48.055 ************************************ 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:48.055 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.055 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.055 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:48.055 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.055 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
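rpc_integrity drives the malloc and passthru bdev RPCs end to end: bdev_malloc_create 8 512 makes an 8 MB bdev with 512-byte blocks (hence num_blocks 16384 in the JSON dumps below), bdev_passthru_create layers Passthru0 on top and claims the base (the second bdev_get_bdevs dump shows Malloc0 with "claimed": true), and both are deleted in reverse order at the end. Run by hand against the same target the sequence looks roughly like this; scripts/rpc.py talking to the default /var/tmp/spdk.sock stands in for the harness's rpc_cmd wrapper, and the jq checks mirror the "jq length" assertions in rpc.sh:

./scripts/rpc.py bdev_malloc_create 8 512                    # prints the new bdev name, e.g. Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length                  # expect 2: claimed Malloc0 plus Passthru0
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                  # back to 0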
00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.055 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:48.055 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.055 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.314 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.314 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:48.314 { 00:04:48.314 "name": "Malloc0", 00:04:48.314 "aliases": [ 00:04:48.314 "fbe7641f-bec5-4209-b326-263671919f99" 00:04:48.314 ], 00:04:48.314 "product_name": "Malloc disk", 00:04:48.314 "block_size": 512, 00:04:48.314 "num_blocks": 16384, 00:04:48.314 "uuid": "fbe7641f-bec5-4209-b326-263671919f99", 00:04:48.314 "assigned_rate_limits": { 00:04:48.314 "rw_ios_per_sec": 0, 00:04:48.314 "rw_mbytes_per_sec": 0, 00:04:48.314 "r_mbytes_per_sec": 0, 00:04:48.314 "w_mbytes_per_sec": 0 00:04:48.314 }, 00:04:48.314 "claimed": false, 00:04:48.314 "zoned": false, 00:04:48.314 "supported_io_types": { 00:04:48.314 "read": true, 00:04:48.314 "write": true, 00:04:48.314 "unmap": true, 00:04:48.314 "flush": true, 00:04:48.314 "reset": true, 00:04:48.314 "nvme_admin": false, 00:04:48.314 "nvme_io": false, 00:04:48.314 "nvme_io_md": false, 00:04:48.314 "write_zeroes": true, 00:04:48.314 "zcopy": true, 00:04:48.314 "get_zone_info": false, 00:04:48.314 "zone_management": false, 00:04:48.314 "zone_append": false, 00:04:48.314 "compare": false, 00:04:48.314 "compare_and_write": false, 00:04:48.314 "abort": true, 00:04:48.314 "seek_hole": false, 00:04:48.314 "seek_data": false, 00:04:48.314 "copy": true, 00:04:48.314 "nvme_iov_md": false 00:04:48.314 }, 00:04:48.314 "memory_domains": [ 00:04:48.314 { 00:04:48.314 "dma_device_id": "system", 00:04:48.314 "dma_device_type": 1 00:04:48.314 }, 00:04:48.314 { 00:04:48.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.314 "dma_device_type": 2 00:04:48.314 } 00:04:48.314 ], 00:04:48.314 "driver_specific": {} 00:04:48.314 } 00:04:48.314 ]' 00:04:48.314 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:48.314 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.314 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:48.314 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.314 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.314 [2024-11-20 16:31:32.978523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:48.314 [2024-11-20 16:31:32.978577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:48.314 [2024-11-20 16:31:32.978602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:48.314 [2024-11-20 16:31:32.978614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.314 [2024-11-20 16:31:32.980724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.314 [2024-11-20 16:31:32.980765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.314 
Passthru0 00:04:48.314 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.314 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.314 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.314 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.314 16:31:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.314 16:31:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.314 { 00:04:48.314 "name": "Malloc0", 00:04:48.314 "aliases": [ 00:04:48.314 "fbe7641f-bec5-4209-b326-263671919f99" 00:04:48.314 ], 00:04:48.314 "product_name": "Malloc disk", 00:04:48.314 "block_size": 512, 00:04:48.314 "num_blocks": 16384, 00:04:48.314 "uuid": "fbe7641f-bec5-4209-b326-263671919f99", 00:04:48.314 "assigned_rate_limits": { 00:04:48.314 "rw_ios_per_sec": 0, 00:04:48.314 "rw_mbytes_per_sec": 0, 00:04:48.314 "r_mbytes_per_sec": 0, 00:04:48.314 "w_mbytes_per_sec": 0 00:04:48.314 }, 00:04:48.314 "claimed": true, 00:04:48.314 "claim_type": "exclusive_write", 00:04:48.314 "zoned": false, 00:04:48.314 "supported_io_types": { 00:04:48.314 "read": true, 00:04:48.314 "write": true, 00:04:48.314 "unmap": true, 00:04:48.314 "flush": true, 00:04:48.314 "reset": true, 00:04:48.314 "nvme_admin": false, 00:04:48.314 "nvme_io": false, 00:04:48.314 "nvme_io_md": false, 00:04:48.314 "write_zeroes": true, 00:04:48.314 "zcopy": true, 00:04:48.314 "get_zone_info": false, 00:04:48.314 "zone_management": false, 00:04:48.314 "zone_append": false, 00:04:48.314 "compare": false, 00:04:48.314 "compare_and_write": false, 00:04:48.314 "abort": true, 00:04:48.314 "seek_hole": false, 00:04:48.314 "seek_data": false, 00:04:48.314 "copy": true, 00:04:48.314 "nvme_iov_md": false 00:04:48.314 }, 00:04:48.314 "memory_domains": [ 00:04:48.314 { 00:04:48.314 "dma_device_id": "system", 00:04:48.314 "dma_device_type": 1 00:04:48.314 }, 00:04:48.314 { 00:04:48.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.314 "dma_device_type": 2 00:04:48.314 } 00:04:48.314 ], 00:04:48.314 "driver_specific": {} 00:04:48.314 }, 00:04:48.314 { 00:04:48.314 "name": "Passthru0", 00:04:48.314 "aliases": [ 00:04:48.314 "389d1187-da77-5d7b-aab4-a9c88471e596" 00:04:48.314 ], 00:04:48.314 "product_name": "passthru", 00:04:48.314 "block_size": 512, 00:04:48.314 "num_blocks": 16384, 00:04:48.315 "uuid": "389d1187-da77-5d7b-aab4-a9c88471e596", 00:04:48.315 "assigned_rate_limits": { 00:04:48.315 "rw_ios_per_sec": 0, 00:04:48.315 "rw_mbytes_per_sec": 0, 00:04:48.315 "r_mbytes_per_sec": 0, 00:04:48.315 "w_mbytes_per_sec": 0 00:04:48.315 }, 00:04:48.315 "claimed": false, 00:04:48.315 "zoned": false, 00:04:48.315 "supported_io_types": { 00:04:48.315 "read": true, 00:04:48.315 "write": true, 00:04:48.315 "unmap": true, 00:04:48.315 "flush": true, 00:04:48.315 "reset": true, 00:04:48.315 "nvme_admin": false, 00:04:48.315 "nvme_io": false, 00:04:48.315 "nvme_io_md": false, 00:04:48.315 "write_zeroes": true, 00:04:48.315 "zcopy": true, 00:04:48.315 "get_zone_info": false, 00:04:48.315 "zone_management": false, 00:04:48.315 "zone_append": false, 00:04:48.315 "compare": false, 00:04:48.315 "compare_and_write": false, 00:04:48.315 "abort": true, 00:04:48.315 "seek_hole": false, 00:04:48.315 "seek_data": false, 00:04:48.315 "copy": true, 00:04:48.315 "nvme_iov_md": false 00:04:48.315 }, 00:04:48.315 "memory_domains": [ 00:04:48.315 { 00:04:48.315 "dma_device_id": "system", 00:04:48.315 "dma_device_type": 1 00:04:48.315 }, 
00:04:48.315 { 00:04:48.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.315 "dma_device_type": 2 00:04:48.315 } 00:04:48.315 ], 00:04:48.315 "driver_specific": { 00:04:48.315 "passthru": { 00:04:48.315 "name": "Passthru0", 00:04:48.315 "base_bdev_name": "Malloc0" 00:04:48.315 } 00:04:48.315 } 00:04:48.315 } 00:04:48.315 ]' 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:48.315 16:31:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:48.315 00:04:48.315 real 0m0.232s 00:04:48.315 user 0m0.119s 00:04:48.315 sys 0m0.032s 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.315 ************************************ 00:04:48.315 END TEST rpc_integrity 00:04:48.315 ************************************ 00:04:48.315 16:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 16:31:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:48.315 16:31:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.315 16:31:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.315 16:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 ************************************ 00:04:48.315 START TEST rpc_plugins 00:04:48.315 ************************************ 00:04:48.315 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:48.315 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:48.315 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.315 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.315 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:48.315 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:48.315 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.315 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.315 16:31:33 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:48.315 { 00:04:48.315 "name": "Malloc1", 00:04:48.315 "aliases": [ 00:04:48.315 "24ea57ef-9b3d-422a-b31a-25be65b4f586" 00:04:48.315 ], 00:04:48.315 "product_name": "Malloc disk", 00:04:48.315 "block_size": 4096, 00:04:48.315 "num_blocks": 256, 00:04:48.315 "uuid": "24ea57ef-9b3d-422a-b31a-25be65b4f586", 00:04:48.315 "assigned_rate_limits": { 00:04:48.315 "rw_ios_per_sec": 0, 00:04:48.315 "rw_mbytes_per_sec": 0, 00:04:48.315 "r_mbytes_per_sec": 0, 00:04:48.315 "w_mbytes_per_sec": 0 00:04:48.315 }, 00:04:48.315 "claimed": false, 00:04:48.315 "zoned": false, 00:04:48.315 "supported_io_types": { 00:04:48.315 "read": true, 00:04:48.315 "write": true, 00:04:48.315 "unmap": true, 00:04:48.315 "flush": true, 00:04:48.315 "reset": true, 00:04:48.315 "nvme_admin": false, 00:04:48.315 "nvme_io": false, 00:04:48.315 "nvme_io_md": false, 00:04:48.315 "write_zeroes": true, 00:04:48.315 "zcopy": true, 00:04:48.315 "get_zone_info": false, 00:04:48.315 "zone_management": false, 00:04:48.315 "zone_append": false, 00:04:48.315 "compare": false, 00:04:48.315 "compare_and_write": false, 00:04:48.315 "abort": true, 00:04:48.315 "seek_hole": false, 00:04:48.315 "seek_data": false, 00:04:48.315 "copy": true, 00:04:48.315 "nvme_iov_md": false 00:04:48.315 }, 00:04:48.315 "memory_domains": [ 00:04:48.315 { 00:04:48.315 "dma_device_id": "system", 00:04:48.315 "dma_device_type": 1 00:04:48.315 }, 00:04:48.315 { 00:04:48.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.315 "dma_device_type": 2 00:04:48.315 } 00:04:48.315 ], 00:04:48.315 "driver_specific": {} 00:04:48.315 } 00:04:48.315 ]' 00:04:48.315 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:48.574 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:48.574 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.574 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.574 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:48.574 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:48.574 16:31:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:48.574 00:04:48.574 real 0m0.115s 00:04:48.574 user 0m0.055s 00:04:48.574 sys 0m0.024s 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.574 16:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 ************************************ 00:04:48.574 END TEST rpc_plugins 00:04:48.574 ************************************ 00:04:48.574 16:31:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:48.574 16:31:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.574 16:31:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.574 16:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 ************************************ 00:04:48.574 START TEST rpc_trace_cmd_test 
00:04:48.574 ************************************ 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:48.574 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57126", 00:04:48.574 "tpoint_group_mask": "0x8", 00:04:48.574 "iscsi_conn": { 00:04:48.574 "mask": "0x2", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "scsi": { 00:04:48.574 "mask": "0x4", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "bdev": { 00:04:48.574 "mask": "0x8", 00:04:48.574 "tpoint_mask": "0xffffffffffffffff" 00:04:48.574 }, 00:04:48.574 "nvmf_rdma": { 00:04:48.574 "mask": "0x10", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "nvmf_tcp": { 00:04:48.574 "mask": "0x20", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "ftl": { 00:04:48.574 "mask": "0x40", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "blobfs": { 00:04:48.574 "mask": "0x80", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "dsa": { 00:04:48.574 "mask": "0x200", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "thread": { 00:04:48.574 "mask": "0x400", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "nvme_pcie": { 00:04:48.574 "mask": "0x800", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "iaa": { 00:04:48.574 "mask": "0x1000", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "nvme_tcp": { 00:04:48.574 "mask": "0x2000", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "bdev_nvme": { 00:04:48.574 "mask": "0x4000", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "sock": { 00:04:48.574 "mask": "0x8000", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "blob": { 00:04:48.574 "mask": "0x10000", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "bdev_raid": { 00:04:48.574 "mask": "0x20000", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 }, 00:04:48.574 "scheduler": { 00:04:48.574 "mask": "0x40000", 00:04:48.574 "tpoint_mask": "0x0" 00:04:48.574 } 00:04:48.574 }' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:48.574 00:04:48.574 real 0m0.164s 00:04:48.574 
user 0m0.135s 00:04:48.574 sys 0m0.020s 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.574 16:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 ************************************ 00:04:48.574 END TEST rpc_trace_cmd_test 00:04:48.574 ************************************ 00:04:48.833 16:31:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:48.833 16:31:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:48.833 16:31:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:48.833 16:31:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.833 16:31:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.833 16:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.833 ************************************ 00:04:48.833 START TEST rpc_daemon_integrity 00:04:48.833 ************************************ 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.833 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:48.833 { 00:04:48.833 "name": "Malloc2", 00:04:48.833 "aliases": [ 00:04:48.833 "03be418a-387a-46a5-8793-bb06f6722c88" 00:04:48.833 ], 00:04:48.833 "product_name": "Malloc disk", 00:04:48.833 "block_size": 512, 00:04:48.833 "num_blocks": 16384, 00:04:48.833 "uuid": "03be418a-387a-46a5-8793-bb06f6722c88", 00:04:48.833 "assigned_rate_limits": { 00:04:48.833 "rw_ios_per_sec": 0, 00:04:48.833 "rw_mbytes_per_sec": 0, 00:04:48.833 "r_mbytes_per_sec": 0, 00:04:48.833 "w_mbytes_per_sec": 0 00:04:48.833 }, 00:04:48.833 "claimed": false, 00:04:48.833 "zoned": false, 00:04:48.833 "supported_io_types": { 00:04:48.833 "read": true, 00:04:48.833 "write": true, 00:04:48.833 "unmap": true, 00:04:48.833 "flush": true, 00:04:48.833 "reset": true, 00:04:48.834 "nvme_admin": false, 00:04:48.834 "nvme_io": false, 00:04:48.834 "nvme_io_md": false, 00:04:48.834 "write_zeroes": true, 00:04:48.834 "zcopy": true, 00:04:48.834 "get_zone_info": 
false, 00:04:48.834 "zone_management": false, 00:04:48.834 "zone_append": false, 00:04:48.834 "compare": false, 00:04:48.834 "compare_and_write": false, 00:04:48.834 "abort": true, 00:04:48.834 "seek_hole": false, 00:04:48.834 "seek_data": false, 00:04:48.834 "copy": true, 00:04:48.834 "nvme_iov_md": false 00:04:48.834 }, 00:04:48.834 "memory_domains": [ 00:04:48.834 { 00:04:48.834 "dma_device_id": "system", 00:04:48.834 "dma_device_type": 1 00:04:48.834 }, 00:04:48.834 { 00:04:48.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.834 "dma_device_type": 2 00:04:48.834 } 00:04:48.834 ], 00:04:48.834 "driver_specific": {} 00:04:48.834 } 00:04:48.834 ]' 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 [2024-11-20 16:31:33.593805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:48.834 [2024-11-20 16:31:33.593857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:48.834 [2024-11-20 16:31:33.593876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:48.834 [2024-11-20 16:31:33.593886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.834 [2024-11-20 16:31:33.596087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.834 [2024-11-20 16:31:33.596129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.834 Passthru0 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.834 { 00:04:48.834 "name": "Malloc2", 00:04:48.834 "aliases": [ 00:04:48.834 "03be418a-387a-46a5-8793-bb06f6722c88" 00:04:48.834 ], 00:04:48.834 "product_name": "Malloc disk", 00:04:48.834 "block_size": 512, 00:04:48.834 "num_blocks": 16384, 00:04:48.834 "uuid": "03be418a-387a-46a5-8793-bb06f6722c88", 00:04:48.834 "assigned_rate_limits": { 00:04:48.834 "rw_ios_per_sec": 0, 00:04:48.834 "rw_mbytes_per_sec": 0, 00:04:48.834 "r_mbytes_per_sec": 0, 00:04:48.834 "w_mbytes_per_sec": 0 00:04:48.834 }, 00:04:48.834 "claimed": true, 00:04:48.834 "claim_type": "exclusive_write", 00:04:48.834 "zoned": false, 00:04:48.834 "supported_io_types": { 00:04:48.834 "read": true, 00:04:48.834 "write": true, 00:04:48.834 "unmap": true, 00:04:48.834 "flush": true, 00:04:48.834 "reset": true, 00:04:48.834 "nvme_admin": false, 00:04:48.834 "nvme_io": false, 00:04:48.834 "nvme_io_md": false, 00:04:48.834 "write_zeroes": true, 00:04:48.834 "zcopy": true, 00:04:48.834 "get_zone_info": false, 00:04:48.834 "zone_management": false, 00:04:48.834 "zone_append": false, 00:04:48.834 "compare": false, 
00:04:48.834 "compare_and_write": false, 00:04:48.834 "abort": true, 00:04:48.834 "seek_hole": false, 00:04:48.834 "seek_data": false, 00:04:48.834 "copy": true, 00:04:48.834 "nvme_iov_md": false 00:04:48.834 }, 00:04:48.834 "memory_domains": [ 00:04:48.834 { 00:04:48.834 "dma_device_id": "system", 00:04:48.834 "dma_device_type": 1 00:04:48.834 }, 00:04:48.834 { 00:04:48.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.834 "dma_device_type": 2 00:04:48.834 } 00:04:48.834 ], 00:04:48.834 "driver_specific": {} 00:04:48.834 }, 00:04:48.834 { 00:04:48.834 "name": "Passthru0", 00:04:48.834 "aliases": [ 00:04:48.834 "ee920d69-a0da-573c-84b3-d95f804c89c5" 00:04:48.834 ], 00:04:48.834 "product_name": "passthru", 00:04:48.834 "block_size": 512, 00:04:48.834 "num_blocks": 16384, 00:04:48.834 "uuid": "ee920d69-a0da-573c-84b3-d95f804c89c5", 00:04:48.834 "assigned_rate_limits": { 00:04:48.834 "rw_ios_per_sec": 0, 00:04:48.834 "rw_mbytes_per_sec": 0, 00:04:48.834 "r_mbytes_per_sec": 0, 00:04:48.834 "w_mbytes_per_sec": 0 00:04:48.834 }, 00:04:48.834 "claimed": false, 00:04:48.834 "zoned": false, 00:04:48.834 "supported_io_types": { 00:04:48.834 "read": true, 00:04:48.834 "write": true, 00:04:48.834 "unmap": true, 00:04:48.834 "flush": true, 00:04:48.834 "reset": true, 00:04:48.834 "nvme_admin": false, 00:04:48.834 "nvme_io": false, 00:04:48.834 "nvme_io_md": false, 00:04:48.834 "write_zeroes": true, 00:04:48.834 "zcopy": true, 00:04:48.834 "get_zone_info": false, 00:04:48.834 "zone_management": false, 00:04:48.834 "zone_append": false, 00:04:48.834 "compare": false, 00:04:48.834 "compare_and_write": false, 00:04:48.834 "abort": true, 00:04:48.834 "seek_hole": false, 00:04:48.834 "seek_data": false, 00:04:48.834 "copy": true, 00:04:48.834 "nvme_iov_md": false 00:04:48.834 }, 00:04:48.834 "memory_domains": [ 00:04:48.834 { 00:04:48.834 "dma_device_id": "system", 00:04:48.834 "dma_device_type": 1 00:04:48.834 }, 00:04:48.834 { 00:04:48.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.834 "dma_device_type": 2 00:04:48.834 } 00:04:48.834 ], 00:04:48.834 "driver_specific": { 00:04:48.834 "passthru": { 00:04:48.834 "name": "Passthru0", 00:04:48.834 "base_bdev_name": "Malloc2" 00:04:48.834 } 00:04:48.834 } 00:04:48.834 } 00:04:48.834 ]' 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:48.834 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.093 16:31:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.093 00:04:49.093 real 0m0.228s 00:04:49.093 user 0m0.128s 00:04:49.093 sys 0m0.029s 00:04:49.093 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.093 16:31:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.093 ************************************ 00:04:49.093 END TEST rpc_daemon_integrity 00:04:49.093 ************************************ 00:04:49.093 16:31:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:49.093 16:31:33 rpc -- rpc/rpc.sh@84 -- # killprocess 57126 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@954 -- # '[' -z 57126 ']' 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@958 -- # kill -0 57126 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@959 -- # uname 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57126 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.093 killing process with pid 57126 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57126' 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@973 -- # kill 57126 00:04:49.093 16:31:33 rpc -- common/autotest_common.sh@978 -- # wait 57126 00:04:50.466 00:04:50.466 real 0m3.341s 00:04:50.466 user 0m3.751s 00:04:50.466 sys 0m0.573s 00:04:50.466 16:31:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.466 ************************************ 00:04:50.466 END TEST rpc 00:04:50.466 16:31:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.466 ************************************ 00:04:50.466 16:31:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.466 16:31:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.466 16:31:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.466 16:31:35 -- common/autotest_common.sh@10 -- # set +x 00:04:50.466 ************************************ 00:04:50.466 START TEST skip_rpc 00:04:50.466 ************************************ 00:04:50.466 16:31:35 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.466 * Looking for test storage... 
00:04:50.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.466 16:31:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.466 16:31:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.466 16:31:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.466 16:31:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.466 16:31:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.467 16:31:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.467 16:31:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.726 16:31:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.726 16:31:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.726 16:31:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.726 16:31:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.726 --rc genhtml_branch_coverage=1 00:04:50.726 --rc genhtml_function_coverage=1 00:04:50.726 --rc genhtml_legend=1 00:04:50.726 --rc geninfo_all_blocks=1 00:04:50.726 --rc geninfo_unexecuted_blocks=1 00:04:50.726 00:04:50.726 ' 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.726 --rc genhtml_branch_coverage=1 00:04:50.726 --rc genhtml_function_coverage=1 00:04:50.726 --rc genhtml_legend=1 00:04:50.726 --rc geninfo_all_blocks=1 00:04:50.726 --rc geninfo_unexecuted_blocks=1 00:04:50.726 00:04:50.726 ' 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:50.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.726 --rc genhtml_branch_coverage=1 00:04:50.726 --rc genhtml_function_coverage=1 00:04:50.726 --rc genhtml_legend=1 00:04:50.726 --rc geninfo_all_blocks=1 00:04:50.726 --rc geninfo_unexecuted_blocks=1 00:04:50.726 00:04:50.726 ' 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.726 --rc genhtml_branch_coverage=1 00:04:50.726 --rc genhtml_function_coverage=1 00:04:50.726 --rc genhtml_legend=1 00:04:50.726 --rc geninfo_all_blocks=1 00:04:50.726 --rc geninfo_unexecuted_blocks=1 00:04:50.726 00:04:50.726 ' 00:04:50.726 16:31:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.726 16:31:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.726 16:31:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.726 16:31:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.726 ************************************ 00:04:50.726 START TEST skip_rpc 00:04:50.726 ************************************ 00:04:50.726 16:31:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:50.726 16:31:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57338 00:04:50.726 16:31:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.726 16:31:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:50.726 16:31:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:50.726 [2024-11-20 16:31:35.430625] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:50.726 [2024-11-20 16:31:35.430738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57338 ] 00:04:50.726 [2024-11-20 16:31:35.592616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.984 [2024-11-20 16:31:35.692596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57338 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57338 ']' 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57338 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57338 00:04:56.262 killing process with pid 57338 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57338' 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57338 00:04:56.262 16:31:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57338 00:04:56.828 00:04:56.828 real 0m6.230s 00:04:56.828 user 0m5.858s 00:04:56.828 sys 0m0.266s 00:04:56.828 16:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.828 16:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.828 ************************************ 00:04:56.828 END TEST skip_rpc 00:04:56.828 
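The skip_rpc run above exercises the `--no-rpc-server` path: the target starts without an RPC listener, so the `spdk_get_version` call is expected to fail before the test kills pid 57338. A minimal standalone sketch of the same check follows; the build and script paths are assumptions for illustration, not taken from this run.

```bash
#!/usr/bin/env bash
# Hedged sketch: verify JSON-RPC is unreachable when spdk_tgt runs with
# --no-rpc-server. SPDK_BIN and RPC paths below are assumptions.
set -euo pipefail

SPDK_BIN=./build/bin/spdk_tgt      # assumed build location
RPC=./scripts/rpc.py               # assumed SPDK rpc client script

"$SPDK_BIN" --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                            # give the reactor time to start, as the test does

# The call should fail because no RPC server is listening.
if "$RPC" spdk_get_version; then
    echo "unexpected: RPC succeeded with --no-rpc-server" >&2
    kill "$tgt_pid"; exit 1
fi

kill "$tgt_pid"
wait "$tgt_pid" || true
echo "OK: RPC correctly unavailable"
```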
************************************ 00:04:56.828 16:31:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.828 16:31:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.828 16:31:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.828 16:31:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.828 ************************************ 00:04:56.828 START TEST skip_rpc_with_json 00:04:56.828 ************************************ 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57431 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57431 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57431 ']' 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.828 16:31:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.828 [2024-11-20 16:31:41.697039] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:56.828 [2024-11-20 16:31:41.697135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57431 ] 00:04:57.086 [2024-11-20 16:31:41.848159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.086 [2024-11-20 16:31:41.929888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.019 [2024-11-20 16:31:42.545905] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:58.019 request: 00:04:58.019 { 00:04:58.019 "trtype": "tcp", 00:04:58.019 "method": "nvmf_get_transports", 00:04:58.019 "req_id": 1 00:04:58.019 } 00:04:58.019 Got JSON-RPC error response 00:04:58.019 response: 00:04:58.019 { 00:04:58.019 "code": -19, 00:04:58.019 "message": "No such device" 00:04:58.019 } 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.019 [2024-11-20 16:31:42.557995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.019 16:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.019 { 00:04:58.019 "subsystems": [ 00:04:58.019 { 00:04:58.019 "subsystem": "fsdev", 00:04:58.019 "config": [ 00:04:58.019 { 00:04:58.019 "method": "fsdev_set_opts", 00:04:58.019 "params": { 00:04:58.019 "fsdev_io_pool_size": 65535, 00:04:58.019 "fsdev_io_cache_size": 256 00:04:58.019 } 00:04:58.019 } 00:04:58.019 ] 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "subsystem": "keyring", 00:04:58.019 "config": [] 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "subsystem": "iobuf", 00:04:58.019 "config": [ 00:04:58.019 { 00:04:58.019 "method": "iobuf_set_options", 00:04:58.019 "params": { 00:04:58.019 "small_pool_count": 8192, 00:04:58.019 "large_pool_count": 1024, 00:04:58.019 "small_bufsize": 8192, 00:04:58.019 "large_bufsize": 135168, 00:04:58.019 "enable_numa": false 00:04:58.019 } 00:04:58.019 } 00:04:58.019 ] 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "subsystem": "sock", 00:04:58.019 "config": [ 00:04:58.019 { 
00:04:58.019 "method": "sock_set_default_impl", 00:04:58.019 "params": { 00:04:58.019 "impl_name": "posix" 00:04:58.019 } 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "method": "sock_impl_set_options", 00:04:58.019 "params": { 00:04:58.019 "impl_name": "ssl", 00:04:58.019 "recv_buf_size": 4096, 00:04:58.019 "send_buf_size": 4096, 00:04:58.019 "enable_recv_pipe": true, 00:04:58.019 "enable_quickack": false, 00:04:58.019 "enable_placement_id": 0, 00:04:58.019 "enable_zerocopy_send_server": true, 00:04:58.019 "enable_zerocopy_send_client": false, 00:04:58.019 "zerocopy_threshold": 0, 00:04:58.019 "tls_version": 0, 00:04:58.019 "enable_ktls": false 00:04:58.019 } 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "method": "sock_impl_set_options", 00:04:58.019 "params": { 00:04:58.019 "impl_name": "posix", 00:04:58.019 "recv_buf_size": 2097152, 00:04:58.019 "send_buf_size": 2097152, 00:04:58.019 "enable_recv_pipe": true, 00:04:58.019 "enable_quickack": false, 00:04:58.019 "enable_placement_id": 0, 00:04:58.019 "enable_zerocopy_send_server": true, 00:04:58.019 "enable_zerocopy_send_client": false, 00:04:58.019 "zerocopy_threshold": 0, 00:04:58.019 "tls_version": 0, 00:04:58.019 "enable_ktls": false 00:04:58.019 } 00:04:58.019 } 00:04:58.019 ] 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "subsystem": "vmd", 00:04:58.019 "config": [] 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "subsystem": "accel", 00:04:58.019 "config": [ 00:04:58.019 { 00:04:58.019 "method": "accel_set_options", 00:04:58.019 "params": { 00:04:58.019 "small_cache_size": 128, 00:04:58.019 "large_cache_size": 16, 00:04:58.019 "task_count": 2048, 00:04:58.019 "sequence_count": 2048, 00:04:58.019 "buf_count": 2048 00:04:58.019 } 00:04:58.019 } 00:04:58.019 ] 00:04:58.019 }, 00:04:58.019 { 00:04:58.019 "subsystem": "bdev", 00:04:58.019 "config": [ 00:04:58.019 { 00:04:58.019 "method": "bdev_set_options", 00:04:58.019 "params": { 00:04:58.019 "bdev_io_pool_size": 65535, 00:04:58.019 "bdev_io_cache_size": 256, 00:04:58.019 "bdev_auto_examine": true, 00:04:58.019 "iobuf_small_cache_size": 128, 00:04:58.019 "iobuf_large_cache_size": 16 00:04:58.019 } 00:04:58.019 }, 00:04:58.019 { 00:04:58.020 "method": "bdev_raid_set_options", 00:04:58.020 "params": { 00:04:58.020 "process_window_size_kb": 1024, 00:04:58.020 "process_max_bandwidth_mb_sec": 0 00:04:58.020 } 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "method": "bdev_iscsi_set_options", 00:04:58.020 "params": { 00:04:58.020 "timeout_sec": 30 00:04:58.020 } 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "method": "bdev_nvme_set_options", 00:04:58.020 "params": { 00:04:58.020 "action_on_timeout": "none", 00:04:58.020 "timeout_us": 0, 00:04:58.020 "timeout_admin_us": 0, 00:04:58.020 "keep_alive_timeout_ms": 10000, 00:04:58.020 "arbitration_burst": 0, 00:04:58.020 "low_priority_weight": 0, 00:04:58.020 "medium_priority_weight": 0, 00:04:58.020 "high_priority_weight": 0, 00:04:58.020 "nvme_adminq_poll_period_us": 10000, 00:04:58.020 "nvme_ioq_poll_period_us": 0, 00:04:58.020 "io_queue_requests": 0, 00:04:58.020 "delay_cmd_submit": true, 00:04:58.020 "transport_retry_count": 4, 00:04:58.020 "bdev_retry_count": 3, 00:04:58.020 "transport_ack_timeout": 0, 00:04:58.020 "ctrlr_loss_timeout_sec": 0, 00:04:58.020 "reconnect_delay_sec": 0, 00:04:58.020 "fast_io_fail_timeout_sec": 0, 00:04:58.020 "disable_auto_failback": false, 00:04:58.020 "generate_uuids": false, 00:04:58.020 "transport_tos": 0, 00:04:58.020 "nvme_error_stat": false, 00:04:58.020 "rdma_srq_size": 0, 00:04:58.020 "io_path_stat": false, 
00:04:58.020 "allow_accel_sequence": false, 00:04:58.020 "rdma_max_cq_size": 0, 00:04:58.020 "rdma_cm_event_timeout_ms": 0, 00:04:58.020 "dhchap_digests": [ 00:04:58.020 "sha256", 00:04:58.020 "sha384", 00:04:58.020 "sha512" 00:04:58.020 ], 00:04:58.020 "dhchap_dhgroups": [ 00:04:58.020 "null", 00:04:58.020 "ffdhe2048", 00:04:58.020 "ffdhe3072", 00:04:58.020 "ffdhe4096", 00:04:58.020 "ffdhe6144", 00:04:58.020 "ffdhe8192" 00:04:58.020 ] 00:04:58.020 } 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "method": "bdev_nvme_set_hotplug", 00:04:58.020 "params": { 00:04:58.020 "period_us": 100000, 00:04:58.020 "enable": false 00:04:58.020 } 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "method": "bdev_wait_for_examine" 00:04:58.020 } 00:04:58.020 ] 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "scsi", 00:04:58.020 "config": null 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "scheduler", 00:04:58.020 "config": [ 00:04:58.020 { 00:04:58.020 "method": "framework_set_scheduler", 00:04:58.020 "params": { 00:04:58.020 "name": "static" 00:04:58.020 } 00:04:58.020 } 00:04:58.020 ] 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "vhost_scsi", 00:04:58.020 "config": [] 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "vhost_blk", 00:04:58.020 "config": [] 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "ublk", 00:04:58.020 "config": [] 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "nbd", 00:04:58.020 "config": [] 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "nvmf", 00:04:58.020 "config": [ 00:04:58.020 { 00:04:58.020 "method": "nvmf_set_config", 00:04:58.020 "params": { 00:04:58.020 "discovery_filter": "match_any", 00:04:58.020 "admin_cmd_passthru": { 00:04:58.020 "identify_ctrlr": false 00:04:58.020 }, 00:04:58.020 "dhchap_digests": [ 00:04:58.020 "sha256", 00:04:58.020 "sha384", 00:04:58.020 "sha512" 00:04:58.020 ], 00:04:58.020 "dhchap_dhgroups": [ 00:04:58.020 "null", 00:04:58.020 "ffdhe2048", 00:04:58.020 "ffdhe3072", 00:04:58.020 "ffdhe4096", 00:04:58.020 "ffdhe6144", 00:04:58.020 "ffdhe8192" 00:04:58.020 ] 00:04:58.020 } 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "method": "nvmf_set_max_subsystems", 00:04:58.020 "params": { 00:04:58.020 "max_subsystems": 1024 00:04:58.020 } 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "method": "nvmf_set_crdt", 00:04:58.020 "params": { 00:04:58.020 "crdt1": 0, 00:04:58.020 "crdt2": 0, 00:04:58.020 "crdt3": 0 00:04:58.020 } 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "method": "nvmf_create_transport", 00:04:58.020 "params": { 00:04:58.020 "trtype": "TCP", 00:04:58.020 "max_queue_depth": 128, 00:04:58.020 "max_io_qpairs_per_ctrlr": 127, 00:04:58.020 "in_capsule_data_size": 4096, 00:04:58.020 "max_io_size": 131072, 00:04:58.020 "io_unit_size": 131072, 00:04:58.020 "max_aq_depth": 128, 00:04:58.020 "num_shared_buffers": 511, 00:04:58.020 "buf_cache_size": 4294967295, 00:04:58.020 "dif_insert_or_strip": false, 00:04:58.020 "zcopy": false, 00:04:58.020 "c2h_success": true, 00:04:58.020 "sock_priority": 0, 00:04:58.020 "abort_timeout_sec": 1, 00:04:58.020 "ack_timeout": 0, 00:04:58.020 "data_wr_pool_size": 0 00:04:58.020 } 00:04:58.020 } 00:04:58.020 ] 00:04:58.020 }, 00:04:58.020 { 00:04:58.020 "subsystem": "iscsi", 00:04:58.020 "config": [ 00:04:58.020 { 00:04:58.020 "method": "iscsi_set_options", 00:04:58.020 "params": { 00:04:58.020 "node_base": "iqn.2016-06.io.spdk", 00:04:58.020 "max_sessions": 128, 00:04:58.020 "max_connections_per_session": 2, 00:04:58.020 "max_queue_depth": 64, 00:04:58.020 
"default_time2wait": 2, 00:04:58.020 "default_time2retain": 20, 00:04:58.020 "first_burst_length": 8192, 00:04:58.020 "immediate_data": true, 00:04:58.020 "allow_duplicated_isid": false, 00:04:58.020 "error_recovery_level": 0, 00:04:58.020 "nop_timeout": 60, 00:04:58.020 "nop_in_interval": 30, 00:04:58.020 "disable_chap": false, 00:04:58.020 "require_chap": false, 00:04:58.020 "mutual_chap": false, 00:04:58.020 "chap_group": 0, 00:04:58.020 "max_large_datain_per_connection": 64, 00:04:58.020 "max_r2t_per_connection": 4, 00:04:58.020 "pdu_pool_size": 36864, 00:04:58.020 "immediate_data_pool_size": 16384, 00:04:58.020 "data_out_pool_size": 2048 00:04:58.020 } 00:04:58.020 } 00:04:58.020 ] 00:04:58.020 } 00:04:58.020 ] 00:04:58.020 } 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57431 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57431 ']' 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57431 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57431 00:04:58.020 killing process with pid 57431 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57431' 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57431 00:04:58.020 16:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57431 00:04:59.392 16:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57465 00:04:59.392 16:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.392 16:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57465 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57465 ']' 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57465 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57465 00:05:04.656 killing process with pid 57465 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57465' 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57465 00:05:04.656 16:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57465 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.591 ************************************ 00:05:05.591 END TEST skip_rpc_with_json 00:05:05.591 ************************************ 00:05:05.591 00:05:05.591 real 0m8.521s 00:05:05.591 user 0m8.159s 00:05:05.591 sys 0m0.579s 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.591 16:31:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.591 16:31:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.591 16:31:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.591 16:31:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.591 ************************************ 00:05:05.591 START TEST skip_rpc_with_delay 00:05:05.591 ************************************ 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.591 [2024-11-20 16:31:50.298569] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
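The skip_rpc_with_json section above saves the live configuration with `save_config`, restarts the target from that file via `--json`, and then greps the log for "TCP Transport Init" to confirm the NVMe-oF TCP transport was re-created. A rough standalone equivalent of that round trip, with placeholder paths, might look like this.

```bash
#!/usr/bin/env bash
# Hedged sketch of the save_config / --json round trip shown above.
# SPDK_BIN, RPC and file locations are assumptions for illustration.
set -euo pipefail

SPDK_BIN=./build/bin/spdk_tgt
RPC=./scripts/rpc.py
CFG=/tmp/config.json
LOG=/tmp/spdk_tgt.log

# 1) Start a target, create a TCP transport, and save the running config.
"$SPDK_BIN" -m 0x1 &
tgt_pid=$!
sleep 5
"$RPC" nvmf_create_transport -t tcp
"$RPC" save_config > "$CFG"
kill "$tgt_pid"; wait "$tgt_pid" || true

# 2) Restart from the saved JSON and check the transport is re-initialized.
"$SPDK_BIN" --no-rpc-server -m 0x1 --json "$CFG" > "$LOG" 2>&1 &
tgt_pid=$!
sleep 5
kill "$tgt_pid"; wait "$tgt_pid" || true

grep -q 'TCP Transport Init' "$LOG" && echo "OK: transport restored from JSON config"
```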
00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:05.591 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.591 ************************************ 00:05:05.591 END TEST skip_rpc_with_delay 00:05:05.592 ************************************ 00:05:05.592 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.592 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.592 00:05:05.592 real 0m0.128s 00:05:05.592 user 0m0.062s 00:05:05.592 sys 0m0.062s 00:05:05.592 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.592 16:31:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:05.592 16:31:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.592 16:31:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.592 16:31:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.592 16:31:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.592 16:31:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.592 16:31:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.592 ************************************ 00:05:05.592 START TEST exit_on_failed_rpc_init 00:05:05.592 ************************************ 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:05.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57588 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57588 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57588 ']' 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.592 16:31:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.850 [2024-11-20 16:31:50.488389] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:05:05.850 [2024-11-20 16:31:50.488508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57588 ] 00:05:05.850 [2024-11-20 16:31:50.644569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.850 [2024-11-20 16:31:50.727017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:06.418 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.677 [2024-11-20 16:31:51.363072] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:06.677 [2024-11-20 16:31:51.363336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57606 ] 00:05:06.677 [2024-11-20 16:31:51.523610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.935 [2024-11-20 16:31:51.621621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.935 [2024-11-20 16:31:51.621847] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:06.935 [2024-11-20 16:31:51.622222] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.935 [2024-11-20 16:31:51.622248] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57588 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57588 ']' 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57588 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.936 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57588 00:05:07.194 killing process with pid 57588 00:05:07.194 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.194 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.194 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57588' 00:05:07.194 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57588 00:05:07.194 16:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57588 00:05:08.568 00:05:08.568 real 0m2.622s 00:05:08.568 user 0m2.911s 00:05:08.568 sys 0m0.388s 00:05:08.568 ************************************ 00:05:08.568 END TEST exit_on_failed_rpc_init 00:05:08.568 ************************************ 00:05:08.568 16:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.568 16:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.568 16:31:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:08.568 00:05:08.568 real 0m17.862s 00:05:08.568 user 0m17.135s 00:05:08.568 sys 0m1.471s 00:05:08.568 16:31:53 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.568 ************************************ 00:05:08.568 END TEST skip_rpc 00:05:08.568 ************************************ 00:05:08.568 16:31:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.568 16:31:53 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.568 16:31:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.568 16:31:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.568 16:31:53 -- common/autotest_common.sh@10 -- # set +x 00:05:08.568 
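The exit_on_failed_rpc_init failure above is two targets contending for the default RPC socket `/var/tmp/spdk.sock`: the second instance (mask 0x2) aborts with "Specify another." while pid 57588 still holds the socket. The sketch below reproduces the conflict and then gives the second instance its own socket; the `-r` RPC-socket option is the usual SPDK application flag, but treat the exact invocation as illustrative rather than part of this run.

```bash
#!/usr/bin/env bash
# Hedged sketch: reproduce the /var/tmp/spdk.sock conflict seen above, then
# avoid it by giving the second target its own RPC socket with -r (assumed flag).
set -euo pipefail

SPDK_BIN=./build/bin/spdk_tgt      # assumed path

"$SPDK_BIN" -m 0x1 &               # first instance owns /var/tmp/spdk.sock
first_pid=$!
sleep 5

# Second instance on the default socket fails, as in the log:
#   "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
if "$SPDK_BIN" -m 0x2; then
    echo "unexpected: second instance started on the same socket" >&2
fi

# A separate RPC socket lets both targets run side by side.
"$SPDK_BIN" -m 0x2 -r /var/tmp/spdk2.sock &
second_pid=$!
sleep 5

kill "$second_pid" "$first_pid"
wait || true
```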
************************************ 00:05:08.568 START TEST rpc_client 00:05:08.568 ************************************ 00:05:08.568 16:31:53 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.568 * Looking for test storage... 00:05:08.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:08.568 16:31:53 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.568 16:31:53 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.568 16:31:53 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.568 16:31:53 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:08.568 16:31:53 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.569 16:31:53 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.569 16:31:53 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.569 16:31:53 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:08.569 16:31:53 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.569 16:31:53 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.569 --rc genhtml_branch_coverage=1 00:05:08.569 --rc genhtml_function_coverage=1 00:05:08.569 --rc genhtml_legend=1 00:05:08.569 --rc geninfo_all_blocks=1 00:05:08.569 --rc geninfo_unexecuted_blocks=1 00:05:08.569 00:05:08.569 ' 00:05:08.569 16:31:53 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.569 --rc genhtml_branch_coverage=1 00:05:08.569 --rc genhtml_function_coverage=1 00:05:08.569 --rc genhtml_legend=1 00:05:08.569 --rc geninfo_all_blocks=1 00:05:08.569 --rc geninfo_unexecuted_blocks=1 00:05:08.569 00:05:08.569 ' 00:05:08.569 16:31:53 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.569 --rc genhtml_branch_coverage=1 00:05:08.569 --rc genhtml_function_coverage=1 00:05:08.569 --rc genhtml_legend=1 00:05:08.569 --rc geninfo_all_blocks=1 00:05:08.569 --rc geninfo_unexecuted_blocks=1 00:05:08.569 00:05:08.569 ' 00:05:08.569 16:31:53 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.569 --rc genhtml_branch_coverage=1 00:05:08.569 --rc genhtml_function_coverage=1 00:05:08.569 --rc genhtml_legend=1 00:05:08.569 --rc geninfo_all_blocks=1 00:05:08.569 --rc geninfo_unexecuted_blocks=1 00:05:08.569 00:05:08.569 ' 00:05:08.569 16:31:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:08.569 OK 00:05:08.569 16:31:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:08.569 00:05:08.569 real 0m0.190s 00:05:08.569 user 0m0.103s 00:05:08.569 sys 0m0.088s 00:05:08.569 16:31:53 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.569 16:31:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:08.569 ************************************ 00:05:08.569 END TEST rpc_client 00:05:08.569 ************************************ 00:05:08.569 16:31:53 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.569 16:31:53 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.569 16:31:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.569 16:31:53 -- common/autotest_common.sh@10 -- # set +x 00:05:08.569 ************************************ 00:05:08.569 START TEST json_config 00:05:08.569 ************************************ 00:05:08.569 16:31:53 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.569 16:31:53 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.569 16:31:53 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.569 16:31:53 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.828 16:31:53 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.828 16:31:53 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.828 16:31:53 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.828 16:31:53 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.828 16:31:53 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.828 16:31:53 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.828 16:31:53 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.828 16:31:53 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.828 16:31:53 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.828 16:31:53 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.828 16:31:53 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.828 16:31:53 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.829 16:31:53 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:08.829 16:31:53 json_config -- scripts/common.sh@345 -- # : 1 00:05:08.829 16:31:53 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.829 16:31:53 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.829 16:31:53 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:08.829 16:31:53 json_config -- scripts/common.sh@353 -- # local d=1 00:05:08.829 16:31:53 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.829 16:31:53 json_config -- scripts/common.sh@355 -- # echo 1 00:05:08.829 16:31:53 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.829 16:31:53 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:08.829 16:31:53 json_config -- scripts/common.sh@353 -- # local d=2 00:05:08.829 16:31:53 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.829 16:31:53 json_config -- scripts/common.sh@355 -- # echo 2 00:05:08.829 16:31:53 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.829 16:31:53 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.829 16:31:53 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.829 16:31:53 json_config -- scripts/common.sh@368 -- # return 0 00:05:08.829 16:31:53 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.829 16:31:53 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.829 --rc genhtml_branch_coverage=1 00:05:08.829 --rc genhtml_function_coverage=1 00:05:08.829 --rc genhtml_legend=1 00:05:08.829 --rc geninfo_all_blocks=1 00:05:08.829 --rc geninfo_unexecuted_blocks=1 00:05:08.829 00:05:08.829 ' 00:05:08.829 16:31:53 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.829 --rc genhtml_branch_coverage=1 00:05:08.829 --rc genhtml_function_coverage=1 00:05:08.829 --rc genhtml_legend=1 00:05:08.829 --rc geninfo_all_blocks=1 00:05:08.829 --rc geninfo_unexecuted_blocks=1 00:05:08.829 00:05:08.829 ' 00:05:08.829 16:31:53 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.829 --rc genhtml_branch_coverage=1 00:05:08.829 --rc genhtml_function_coverage=1 00:05:08.829 --rc genhtml_legend=1 00:05:08.829 --rc geninfo_all_blocks=1 00:05:08.829 --rc geninfo_unexecuted_blocks=1 00:05:08.829 00:05:08.829 ' 00:05:08.829 16:31:53 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.829 --rc genhtml_branch_coverage=1 00:05:08.829 --rc genhtml_function_coverage=1 00:05:08.829 --rc genhtml_legend=1 00:05:08.829 --rc geninfo_all_blocks=1 00:05:08.829 --rc geninfo_unexecuted_blocks=1 00:05:08.829 00:05:08.829 ' 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.829 16:31:53 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14dbd995-d808-4651-988a-ff7c615cd4c8 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=14dbd995-d808-4651-988a-ff7c615cd4c8 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.829 16:31:53 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.829 16:31:53 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.829 16:31:53 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.829 16:31:53 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.829 16:31:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.829 16:31:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.829 16:31:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.829 16:31:53 json_config -- paths/export.sh@5 -- # export PATH 00:05:08.829 16:31:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@51 -- # : 0 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.829 16:31:53 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.829 16:31:53 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:08.829 WARNING: No tests are enabled so not running JSON configuration tests 00:05:08.829 16:31:53 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:08.829 00:05:08.829 real 0m0.139s 00:05:08.829 user 0m0.094s 00:05:08.829 sys 0m0.046s 00:05:08.829 16:31:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.829 ************************************ 00:05:08.829 END TEST json_config 00:05:08.829 16:31:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.829 ************************************ 00:05:08.829 16:31:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:08.829 16:31:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.829 16:31:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.829 16:31:53 -- common/autotest_common.sh@10 -- # set +x 00:05:08.829 ************************************ 00:05:08.829 START TEST json_config_extra_key 00:05:08.829 ************************************ 00:05:08.829 16:31:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:08.829 16:31:53 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.829 16:31:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.829 16:31:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.829 16:31:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.829 16:31:53 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.829 16:31:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:08.830 16:31:53 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.830 16:31:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.830 --rc genhtml_branch_coverage=1 00:05:08.830 --rc genhtml_function_coverage=1 00:05:08.830 --rc genhtml_legend=1 00:05:08.830 --rc geninfo_all_blocks=1 00:05:08.830 --rc geninfo_unexecuted_blocks=1 00:05:08.830 00:05:08.830 ' 00:05:08.830 16:31:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.830 --rc genhtml_branch_coverage=1 00:05:08.830 --rc genhtml_function_coverage=1 00:05:08.830 --rc genhtml_legend=1 00:05:08.830 --rc geninfo_all_blocks=1 00:05:08.830 --rc geninfo_unexecuted_blocks=1 00:05:08.830 00:05:08.830 ' 00:05:08.830 16:31:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.830 --rc genhtml_branch_coverage=1 00:05:08.830 --rc genhtml_function_coverage=1 00:05:08.830 --rc genhtml_legend=1 00:05:08.830 --rc geninfo_all_blocks=1 00:05:08.830 --rc geninfo_unexecuted_blocks=1 00:05:08.830 00:05:08.830 ' 00:05:08.830 16:31:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.830 --rc genhtml_branch_coverage=1 00:05:08.830 --rc 
genhtml_function_coverage=1 00:05:08.830 --rc genhtml_legend=1 00:05:08.830 --rc geninfo_all_blocks=1 00:05:08.830 --rc geninfo_unexecuted_blocks=1 00:05:08.830 00:05:08.830 ' 00:05:08.830 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14dbd995-d808-4651-988a-ff7c615cd4c8 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=14dbd995-d808-4651-988a-ff7c615cd4c8 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.830 16:31:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.830 16:31:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.089 16:31:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.089 16:31:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.089 16:31:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.089 16:31:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.089 16:31:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.089 16:31:53 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.089 16:31:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.089 16:31:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.089 16:31:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.089 INFO: launching applications... 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
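The "[: : integer expression expected" complaints traced above come from nvmf/common.sh line 33, where an empty value ends up in an arithmetic test ('[' '' -eq 1 ']'). In these runs the message is only noise: the test returns false and the script carries on. The failure mode and two common guards are easy to reproduce with a generic variable (names below are illustrative, not taken from the script):

  flag=""
  [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected" and stays false
  [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting the value keeps the operand numeric
  [[ $flag == 1 ]] && echo enabled         # or compare as a string instead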
00:05:09.089 16:31:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57794 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.089 Waiting for target to run... 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57794 /var/tmp/spdk_tgt.sock 00:05:09.089 16:31:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57794 ']' 00:05:09.089 16:31:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.089 16:31:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.089 16:31:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.089 16:31:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.089 16:31:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.089 16:31:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.089 [2024-11-20 16:31:53.800540] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:09.089 [2024-11-20 16:31:53.800659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57794 ] 00:05:09.347 [2024-11-20 16:31:54.116199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.347 [2024-11-20 16:31:54.206844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.919 00:05:09.919 INFO: shutting down applications... 00:05:09.919 16:31:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.919 16:31:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:09.919 16:31:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
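The entries above trace json_config_test_start_app: spdk_tgt is started with a one-core mask, 1024 MiB of memory, a private RPC socket and the extra_key.json config, and waitforlisten then polls that socket until the target answers. A stand-alone sketch of the same launch-and-wait pattern (the polling RPC, retry count and sleep interval here are illustrative, not lifted from waitforlisten):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock

  "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json ./extra_key.json &
  tgt_pid=$!

  # poll the RPC socket until the target responds, then the test can drive it
  for _ in $(seq 1 100); do
      "$RPC_PY" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done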
00:05:09.919 16:31:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57794 ]] 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57794 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57794 00:05:09.919 16:31:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.486 16:31:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.486 16:31:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.486 16:31:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57794 00:05:10.486 16:31:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.051 16:31:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.051 16:31:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.051 16:31:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57794 00:05:11.051 16:31:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.615 16:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.615 16:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.615 16:31:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57794 00:05:11.615 16:31:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.874 16:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.874 16:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.874 16:31:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57794 00:05:11.874 SPDK target shutdown done 00:05:11.874 Success 00:05:11.874 16:31:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.874 16:31:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.874 16:31:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.874 16:31:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.874 16:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.874 ************************************ 00:05:11.874 END TEST json_config_extra_key 00:05:11.874 ************************************ 00:05:11.874 00:05:11.874 real 0m3.160s 00:05:11.874 user 0m2.796s 00:05:11.874 sys 0m0.406s 00:05:11.874 16:31:56 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.874 16:31:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.133 16:31:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.133 16:31:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.133 16:31:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.133 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:05:12.133 
************************************ 00:05:12.133 START TEST alias_rpc 00:05:12.133 ************************************ 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.133 * Looking for test storage... 00:05:12.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
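The lt/cmp_versions trace that opens every suite is a field-by-field numeric comparison of two dotted version strings (here 1.15 against 2, derived from lcov --version); its result appears to decide which LCOV_OPTS get exported. A compact approximation of that comparison (the repo's cmp_versions in scripts/common.sh is more general):

  # succeed if dotted version $1 is strictly lower than $2
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
          ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
      done
      return 1   # versions are equal
  }

  version_lt 1.15 2 && echo "1.15 sorts before 2"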
00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.133 16:31:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.133 --rc genhtml_branch_coverage=1 00:05:12.133 --rc genhtml_function_coverage=1 00:05:12.133 --rc genhtml_legend=1 00:05:12.133 --rc geninfo_all_blocks=1 00:05:12.133 --rc geninfo_unexecuted_blocks=1 00:05:12.133 00:05:12.133 ' 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.133 --rc genhtml_branch_coverage=1 00:05:12.133 --rc genhtml_function_coverage=1 00:05:12.133 --rc genhtml_legend=1 00:05:12.133 --rc geninfo_all_blocks=1 00:05:12.133 --rc geninfo_unexecuted_blocks=1 00:05:12.133 00:05:12.133 ' 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.133 --rc genhtml_branch_coverage=1 00:05:12.133 --rc genhtml_function_coverage=1 00:05:12.133 --rc genhtml_legend=1 00:05:12.133 --rc geninfo_all_blocks=1 00:05:12.133 --rc geninfo_unexecuted_blocks=1 00:05:12.133 00:05:12.133 ' 00:05:12.133 16:31:56 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.133 --rc genhtml_branch_coverage=1 00:05:12.133 --rc genhtml_function_coverage=1 00:05:12.133 --rc genhtml_legend=1 00:05:12.133 --rc geninfo_all_blocks=1 00:05:12.133 --rc geninfo_unexecuted_blocks=1 00:05:12.133 00:05:12.133 ' 00:05:12.134 16:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.134 16:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57887 00:05:12.134 16:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57887 00:05:12.134 16:31:56 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57887 ']' 00:05:12.134 16:31:56 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.134 16:31:56 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.134 16:31:56 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.134 16:31:56 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.134 16:31:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.134 16:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.134 [2024-11-20 16:31:56.989516] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:05:12.134 [2024-11-20 16:31:56.989631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57887 ] 00:05:12.392 [2024-11-20 16:31:57.148486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.392 [2024-11-20 16:31:57.250261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.973 16:31:57 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.973 16:31:57 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.973 16:31:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:13.265 16:31:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57887 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57887 ']' 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57887 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57887 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57887' 00:05:13.265 killing process with pid 57887 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 57887 00:05:13.265 16:31:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 57887 00:05:15.163 00:05:15.163 real 0m2.830s 00:05:15.163 user 0m2.910s 00:05:15.163 sys 0m0.408s 00:05:15.163 ************************************ 00:05:15.163 END TEST alias_rpc 00:05:15.163 ************************************ 00:05:15.163 16:31:59 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.163 16:31:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.163 16:31:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:15.163 16:31:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.163 16:31:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.163 16:31:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.163 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:05:15.163 ************************************ 00:05:15.163 START TEST spdkcli_tcp 00:05:15.163 ************************************ 00:05:15.163 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.163 * Looking for test storage... 
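The teardown traced just above for pid 57887 is the stock killprocess helper: make sure the pid is set and still alive, check the command name so a sudo wrapper is never signalled by accident, then kill and wait to collect the exit status. A reduced sketch of that sequence (what the real helper does for a sudo-wrapped process is not visible in this trace, so the sketch simply bails out):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                     # nothing to kill
      kill -0 "$pid" 2>/dev/null || return 1        # already gone
      if [ "$(uname)" = Linux ]; then
          [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

  killprocess "$spdk_tgt_pid"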
00:05:15.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:15.163 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.163 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.163 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.163 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.163 16:31:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:15.163 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.163 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.164 --rc genhtml_branch_coverage=1 00:05:15.164 --rc genhtml_function_coverage=1 00:05:15.164 --rc genhtml_legend=1 00:05:15.164 --rc geninfo_all_blocks=1 00:05:15.164 --rc geninfo_unexecuted_blocks=1 00:05:15.164 00:05:15.164 ' 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.164 --rc genhtml_branch_coverage=1 00:05:15.164 --rc genhtml_function_coverage=1 00:05:15.164 --rc genhtml_legend=1 00:05:15.164 --rc geninfo_all_blocks=1 00:05:15.164 --rc geninfo_unexecuted_blocks=1 00:05:15.164 
00:05:15.164 ' 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.164 --rc genhtml_branch_coverage=1 00:05:15.164 --rc genhtml_function_coverage=1 00:05:15.164 --rc genhtml_legend=1 00:05:15.164 --rc geninfo_all_blocks=1 00:05:15.164 --rc geninfo_unexecuted_blocks=1 00:05:15.164 00:05:15.164 ' 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.164 --rc genhtml_branch_coverage=1 00:05:15.164 --rc genhtml_function_coverage=1 00:05:15.164 --rc genhtml_legend=1 00:05:15.164 --rc geninfo_all_blocks=1 00:05:15.164 --rc geninfo_unexecuted_blocks=1 00:05:15.164 00:05:15.164 ' 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57983 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57983 00:05:15.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57983 ']' 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.164 16:31:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.164 16:31:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.164 [2024-11-20 16:31:59.870600] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
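spdkcli_tcp starts the target on two cores (-m 0x3) and, as the next entries show, publishes its UNIX-domain RPC socket on 127.0.0.1:9998 with socat so that rpc.py can reach it over TCP. Reduced to its essentials, the bridge looks like this (port, timeout and retry values mirror the trace):

  SOCK=/var/tmp/spdk.sock
  socat TCP-LISTEN:9998 UNIX-CONNECT:"$SOCK" &
  socat_pid=$!

  # drive the target over TCP instead of the UNIX socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"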
00:05:15.164 [2024-11-20 16:31:59.870883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57983 ] 00:05:15.164 [2024-11-20 16:32:00.028912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.425 [2024-11-20 16:32:00.130699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.425 [2024-11-20 16:32:00.130873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.995 16:32:00 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.995 16:32:00 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:15.995 16:32:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58000 00:05:15.995 16:32:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:15.995 16:32:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:16.254 [ 00:05:16.254 "bdev_malloc_delete", 00:05:16.254 "bdev_malloc_create", 00:05:16.254 "bdev_null_resize", 00:05:16.254 "bdev_null_delete", 00:05:16.254 "bdev_null_create", 00:05:16.254 "bdev_nvme_cuse_unregister", 00:05:16.254 "bdev_nvme_cuse_register", 00:05:16.254 "bdev_opal_new_user", 00:05:16.254 "bdev_opal_set_lock_state", 00:05:16.254 "bdev_opal_delete", 00:05:16.254 "bdev_opal_get_info", 00:05:16.254 "bdev_opal_create", 00:05:16.254 "bdev_nvme_opal_revert", 00:05:16.254 "bdev_nvme_opal_init", 00:05:16.254 "bdev_nvme_send_cmd", 00:05:16.254 "bdev_nvme_set_keys", 00:05:16.254 "bdev_nvme_get_path_iostat", 00:05:16.254 "bdev_nvme_get_mdns_discovery_info", 00:05:16.254 "bdev_nvme_stop_mdns_discovery", 00:05:16.254 "bdev_nvme_start_mdns_discovery", 00:05:16.254 "bdev_nvme_set_multipath_policy", 00:05:16.254 "bdev_nvme_set_preferred_path", 00:05:16.254 "bdev_nvme_get_io_paths", 00:05:16.254 "bdev_nvme_remove_error_injection", 00:05:16.254 "bdev_nvme_add_error_injection", 00:05:16.254 "bdev_nvme_get_discovery_info", 00:05:16.254 "bdev_nvme_stop_discovery", 00:05:16.254 "bdev_nvme_start_discovery", 00:05:16.254 "bdev_nvme_get_controller_health_info", 00:05:16.254 "bdev_nvme_disable_controller", 00:05:16.254 "bdev_nvme_enable_controller", 00:05:16.254 "bdev_nvme_reset_controller", 00:05:16.254 "bdev_nvme_get_transport_statistics", 00:05:16.254 "bdev_nvme_apply_firmware", 00:05:16.254 "bdev_nvme_detach_controller", 00:05:16.254 "bdev_nvme_get_controllers", 00:05:16.254 "bdev_nvme_attach_controller", 00:05:16.254 "bdev_nvme_set_hotplug", 00:05:16.254 "bdev_nvme_set_options", 00:05:16.254 "bdev_passthru_delete", 00:05:16.254 "bdev_passthru_create", 00:05:16.254 "bdev_lvol_set_parent_bdev", 00:05:16.254 "bdev_lvol_set_parent", 00:05:16.255 "bdev_lvol_check_shallow_copy", 00:05:16.255 "bdev_lvol_start_shallow_copy", 00:05:16.255 "bdev_lvol_grow_lvstore", 00:05:16.255 "bdev_lvol_get_lvols", 00:05:16.255 "bdev_lvol_get_lvstores", 00:05:16.255 "bdev_lvol_delete", 00:05:16.255 "bdev_lvol_set_read_only", 00:05:16.255 "bdev_lvol_resize", 00:05:16.255 "bdev_lvol_decouple_parent", 00:05:16.255 "bdev_lvol_inflate", 00:05:16.255 "bdev_lvol_rename", 00:05:16.255 "bdev_lvol_clone_bdev", 00:05:16.255 "bdev_lvol_clone", 00:05:16.255 "bdev_lvol_snapshot", 00:05:16.255 "bdev_lvol_create", 00:05:16.255 "bdev_lvol_delete_lvstore", 00:05:16.255 "bdev_lvol_rename_lvstore", 00:05:16.255 
"bdev_lvol_create_lvstore", 00:05:16.255 "bdev_raid_set_options", 00:05:16.255 "bdev_raid_remove_base_bdev", 00:05:16.255 "bdev_raid_add_base_bdev", 00:05:16.255 "bdev_raid_delete", 00:05:16.255 "bdev_raid_create", 00:05:16.255 "bdev_raid_get_bdevs", 00:05:16.255 "bdev_error_inject_error", 00:05:16.255 "bdev_error_delete", 00:05:16.255 "bdev_error_create", 00:05:16.255 "bdev_split_delete", 00:05:16.255 "bdev_split_create", 00:05:16.255 "bdev_delay_delete", 00:05:16.255 "bdev_delay_create", 00:05:16.255 "bdev_delay_update_latency", 00:05:16.255 "bdev_zone_block_delete", 00:05:16.255 "bdev_zone_block_create", 00:05:16.255 "blobfs_create", 00:05:16.255 "blobfs_detect", 00:05:16.255 "blobfs_set_cache_size", 00:05:16.255 "bdev_xnvme_delete", 00:05:16.255 "bdev_xnvme_create", 00:05:16.255 "bdev_aio_delete", 00:05:16.255 "bdev_aio_rescan", 00:05:16.255 "bdev_aio_create", 00:05:16.255 "bdev_ftl_set_property", 00:05:16.255 "bdev_ftl_get_properties", 00:05:16.255 "bdev_ftl_get_stats", 00:05:16.255 "bdev_ftl_unmap", 00:05:16.255 "bdev_ftl_unload", 00:05:16.255 "bdev_ftl_delete", 00:05:16.255 "bdev_ftl_load", 00:05:16.255 "bdev_ftl_create", 00:05:16.255 "bdev_virtio_attach_controller", 00:05:16.255 "bdev_virtio_scsi_get_devices", 00:05:16.255 "bdev_virtio_detach_controller", 00:05:16.255 "bdev_virtio_blk_set_hotplug", 00:05:16.255 "bdev_iscsi_delete", 00:05:16.255 "bdev_iscsi_create", 00:05:16.255 "bdev_iscsi_set_options", 00:05:16.255 "accel_error_inject_error", 00:05:16.255 "ioat_scan_accel_module", 00:05:16.255 "dsa_scan_accel_module", 00:05:16.255 "iaa_scan_accel_module", 00:05:16.255 "keyring_file_remove_key", 00:05:16.255 "keyring_file_add_key", 00:05:16.255 "keyring_linux_set_options", 00:05:16.255 "fsdev_aio_delete", 00:05:16.255 "fsdev_aio_create", 00:05:16.255 "iscsi_get_histogram", 00:05:16.255 "iscsi_enable_histogram", 00:05:16.255 "iscsi_set_options", 00:05:16.255 "iscsi_get_auth_groups", 00:05:16.255 "iscsi_auth_group_remove_secret", 00:05:16.255 "iscsi_auth_group_add_secret", 00:05:16.255 "iscsi_delete_auth_group", 00:05:16.255 "iscsi_create_auth_group", 00:05:16.255 "iscsi_set_discovery_auth", 00:05:16.255 "iscsi_get_options", 00:05:16.255 "iscsi_target_node_request_logout", 00:05:16.255 "iscsi_target_node_set_redirect", 00:05:16.255 "iscsi_target_node_set_auth", 00:05:16.255 "iscsi_target_node_add_lun", 00:05:16.255 "iscsi_get_stats", 00:05:16.255 "iscsi_get_connections", 00:05:16.255 "iscsi_portal_group_set_auth", 00:05:16.255 "iscsi_start_portal_group", 00:05:16.255 "iscsi_delete_portal_group", 00:05:16.255 "iscsi_create_portal_group", 00:05:16.255 "iscsi_get_portal_groups", 00:05:16.255 "iscsi_delete_target_node", 00:05:16.255 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.255 "iscsi_target_node_add_pg_ig_maps", 00:05:16.255 "iscsi_create_target_node", 00:05:16.255 "iscsi_get_target_nodes", 00:05:16.255 "iscsi_delete_initiator_group", 00:05:16.255 "iscsi_initiator_group_remove_initiators", 00:05:16.255 "iscsi_initiator_group_add_initiators", 00:05:16.255 "iscsi_create_initiator_group", 00:05:16.255 "iscsi_get_initiator_groups", 00:05:16.255 "nvmf_set_crdt", 00:05:16.255 "nvmf_set_config", 00:05:16.255 "nvmf_set_max_subsystems", 00:05:16.255 "nvmf_stop_mdns_prr", 00:05:16.255 "nvmf_publish_mdns_prr", 00:05:16.255 "nvmf_subsystem_get_listeners", 00:05:16.255 "nvmf_subsystem_get_qpairs", 00:05:16.255 "nvmf_subsystem_get_controllers", 00:05:16.255 "nvmf_get_stats", 00:05:16.255 "nvmf_get_transports", 00:05:16.255 "nvmf_create_transport", 00:05:16.255 "nvmf_get_targets", 00:05:16.255 
"nvmf_delete_target", 00:05:16.255 "nvmf_create_target", 00:05:16.255 "nvmf_subsystem_allow_any_host", 00:05:16.255 "nvmf_subsystem_set_keys", 00:05:16.255 "nvmf_subsystem_remove_host", 00:05:16.255 "nvmf_subsystem_add_host", 00:05:16.255 "nvmf_ns_remove_host", 00:05:16.255 "nvmf_ns_add_host", 00:05:16.255 "nvmf_subsystem_remove_ns", 00:05:16.255 "nvmf_subsystem_set_ns_ana_group", 00:05:16.255 "nvmf_subsystem_add_ns", 00:05:16.255 "nvmf_subsystem_listener_set_ana_state", 00:05:16.255 "nvmf_discovery_get_referrals", 00:05:16.255 "nvmf_discovery_remove_referral", 00:05:16.255 "nvmf_discovery_add_referral", 00:05:16.255 "nvmf_subsystem_remove_listener", 00:05:16.255 "nvmf_subsystem_add_listener", 00:05:16.255 "nvmf_delete_subsystem", 00:05:16.255 "nvmf_create_subsystem", 00:05:16.255 "nvmf_get_subsystems", 00:05:16.255 "env_dpdk_get_mem_stats", 00:05:16.255 "nbd_get_disks", 00:05:16.255 "nbd_stop_disk", 00:05:16.255 "nbd_start_disk", 00:05:16.255 "ublk_recover_disk", 00:05:16.255 "ublk_get_disks", 00:05:16.255 "ublk_stop_disk", 00:05:16.255 "ublk_start_disk", 00:05:16.255 "ublk_destroy_target", 00:05:16.255 "ublk_create_target", 00:05:16.255 "virtio_blk_create_transport", 00:05:16.255 "virtio_blk_get_transports", 00:05:16.255 "vhost_controller_set_coalescing", 00:05:16.255 "vhost_get_controllers", 00:05:16.255 "vhost_delete_controller", 00:05:16.255 "vhost_create_blk_controller", 00:05:16.255 "vhost_scsi_controller_remove_target", 00:05:16.255 "vhost_scsi_controller_add_target", 00:05:16.255 "vhost_start_scsi_controller", 00:05:16.255 "vhost_create_scsi_controller", 00:05:16.255 "thread_set_cpumask", 00:05:16.255 "scheduler_set_options", 00:05:16.255 "framework_get_governor", 00:05:16.255 "framework_get_scheduler", 00:05:16.255 "framework_set_scheduler", 00:05:16.255 "framework_get_reactors", 00:05:16.255 "thread_get_io_channels", 00:05:16.255 "thread_get_pollers", 00:05:16.255 "thread_get_stats", 00:05:16.255 "framework_monitor_context_switch", 00:05:16.255 "spdk_kill_instance", 00:05:16.255 "log_enable_timestamps", 00:05:16.255 "log_get_flags", 00:05:16.255 "log_clear_flag", 00:05:16.255 "log_set_flag", 00:05:16.255 "log_get_level", 00:05:16.255 "log_set_level", 00:05:16.255 "log_get_print_level", 00:05:16.255 "log_set_print_level", 00:05:16.255 "framework_enable_cpumask_locks", 00:05:16.255 "framework_disable_cpumask_locks", 00:05:16.255 "framework_wait_init", 00:05:16.255 "framework_start_init", 00:05:16.255 "scsi_get_devices", 00:05:16.255 "bdev_get_histogram", 00:05:16.255 "bdev_enable_histogram", 00:05:16.255 "bdev_set_qos_limit", 00:05:16.255 "bdev_set_qd_sampling_period", 00:05:16.255 "bdev_get_bdevs", 00:05:16.255 "bdev_reset_iostat", 00:05:16.255 "bdev_get_iostat", 00:05:16.255 "bdev_examine", 00:05:16.255 "bdev_wait_for_examine", 00:05:16.255 "bdev_set_options", 00:05:16.255 "accel_get_stats", 00:05:16.255 "accel_set_options", 00:05:16.255 "accel_set_driver", 00:05:16.255 "accel_crypto_key_destroy", 00:05:16.255 "accel_crypto_keys_get", 00:05:16.255 "accel_crypto_key_create", 00:05:16.255 "accel_assign_opc", 00:05:16.255 "accel_get_module_info", 00:05:16.255 "accel_get_opc_assignments", 00:05:16.255 "vmd_rescan", 00:05:16.255 "vmd_remove_device", 00:05:16.255 "vmd_enable", 00:05:16.255 "sock_get_default_impl", 00:05:16.255 "sock_set_default_impl", 00:05:16.255 "sock_impl_set_options", 00:05:16.255 "sock_impl_get_options", 00:05:16.255 "iobuf_get_stats", 00:05:16.255 "iobuf_set_options", 00:05:16.255 "keyring_get_keys", 00:05:16.255 "framework_get_pci_devices", 00:05:16.255 
"framework_get_config", 00:05:16.255 "framework_get_subsystems", 00:05:16.255 "fsdev_set_opts", 00:05:16.255 "fsdev_get_opts", 00:05:16.255 "trace_get_info", 00:05:16.255 "trace_get_tpoint_group_mask", 00:05:16.255 "trace_disable_tpoint_group", 00:05:16.255 "trace_enable_tpoint_group", 00:05:16.255 "trace_clear_tpoint_mask", 00:05:16.255 "trace_set_tpoint_mask", 00:05:16.255 "notify_get_notifications", 00:05:16.255 "notify_get_types", 00:05:16.255 "spdk_get_version", 00:05:16.255 "rpc_get_methods" 00:05:16.255 ] 00:05:16.255 16:32:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.255 16:32:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.255 16:32:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57983 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57983 ']' 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57983 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57983 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.255 16:32:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57983' 00:05:16.255 killing process with pid 57983 00:05:16.256 16:32:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57983 00:05:16.256 16:32:00 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57983 00:05:17.639 00:05:17.639 real 0m2.834s 00:05:17.639 user 0m5.085s 00:05:17.639 sys 0m0.436s 00:05:17.639 ************************************ 00:05:17.639 END TEST spdkcli_tcp 00:05:17.639 ************************************ 00:05:17.639 16:32:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.639 16:32:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.639 16:32:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.639 16:32:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.639 16:32:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.639 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:17.639 ************************************ 00:05:17.639 START TEST dpdk_mem_utility 00:05:17.639 ************************************ 00:05:17.639 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.898 * Looking for test storage... 
00:05:17.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:17.898 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.898 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.898 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.898 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.898 16:32:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:17.898 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.898 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.898 --rc genhtml_branch_coverage=1 00:05:17.898 --rc genhtml_function_coverage=1 00:05:17.898 --rc genhtml_legend=1 00:05:17.899 --rc geninfo_all_blocks=1 00:05:17.899 --rc geninfo_unexecuted_blocks=1 00:05:17.899 00:05:17.899 ' 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.899 --rc 
genhtml_branch_coverage=1 00:05:17.899 --rc genhtml_function_coverage=1 00:05:17.899 --rc genhtml_legend=1 00:05:17.899 --rc geninfo_all_blocks=1 00:05:17.899 --rc geninfo_unexecuted_blocks=1 00:05:17.899 00:05:17.899 ' 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.899 --rc genhtml_branch_coverage=1 00:05:17.899 --rc genhtml_function_coverage=1 00:05:17.899 --rc genhtml_legend=1 00:05:17.899 --rc geninfo_all_blocks=1 00:05:17.899 --rc geninfo_unexecuted_blocks=1 00:05:17.899 00:05:17.899 ' 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.899 --rc genhtml_branch_coverage=1 00:05:17.899 --rc genhtml_function_coverage=1 00:05:17.899 --rc genhtml_legend=1 00:05:17.899 --rc geninfo_all_blocks=1 00:05:17.899 --rc geninfo_unexecuted_blocks=1 00:05:17.899 00:05:17.899 ' 00:05:17.899 16:32:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.899 16:32:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58094 00:05:17.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.899 16:32:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58094 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58094 ']' 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.899 16:32:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.899 16:32:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.899 [2024-11-20 16:32:02.732689] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
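The dpdk_mem_utility suite that starts here follows a short recipe, visible in the next entries: ask the running target to dump its DPDK memory state with the env_dpdk_get_mem_stats RPC (which reports /tmp/spdk_mem_dump.txt as its output file), then post-process that dump with scripts/dpdk_mem_info.py to produce the heap/mempool/memzone summary reproduced below. Condensed, the flow is:

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # have spdk_tgt write its memory dump (the RPC returns the dump filename)
  "$RPC_PY" env_dpdk_get_mem_stats

  # summarize the dump: overall view first, then heap 0 in detail
  "$MEM_SCRIPT"
  "$MEM_SCRIPT" -m 0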
00:05:17.899 [2024-11-20 16:32:02.732954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58094 ] 00:05:18.159 [2024-11-20 16:32:02.891712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.159 [2024-11-20 16:32:02.997683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.726 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.726 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:18.726 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.726 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.726 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.726 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.985 { 00:05:18.985 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.985 } 00:05:18.985 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.985 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:18.985 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:18.985 1 heaps totaling size 816.000000 MiB 00:05:18.985 size: 816.000000 MiB heap id: 0 00:05:18.985 end heaps---------- 00:05:18.985 9 mempools totaling size 595.772034 MiB 00:05:18.985 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:18.985 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:18.985 size: 92.545471 MiB name: bdev_io_58094 00:05:18.985 size: 50.003479 MiB name: msgpool_58094 00:05:18.985 size: 36.509338 MiB name: fsdev_io_58094 00:05:18.985 size: 21.763794 MiB name: PDU_Pool 00:05:18.985 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:18.985 size: 4.133484 MiB name: evtpool_58094 00:05:18.985 size: 0.026123 MiB name: Session_Pool 00:05:18.985 end mempools------- 00:05:18.985 6 memzones totaling size 4.142822 MiB 00:05:18.985 size: 1.000366 MiB name: RG_ring_0_58094 00:05:18.985 size: 1.000366 MiB name: RG_ring_1_58094 00:05:18.985 size: 1.000366 MiB name: RG_ring_4_58094 00:05:18.985 size: 1.000366 MiB name: RG_ring_5_58094 00:05:18.985 size: 0.125366 MiB name: RG_ring_2_58094 00:05:18.985 size: 0.015991 MiB name: RG_ring_3_58094 00:05:18.985 end memzones------- 00:05:18.985 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:18.985 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:05:18.985 list of free elements. 
size: 16.790161 MiB 00:05:18.985 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:18.985 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:18.985 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:18.985 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:18.985 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:18.985 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:18.985 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:18.985 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:18.985 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:18.985 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:18.985 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:18.985 element at address: 0x20001ac00000 with size: 0.559509 MiB 00:05:18.985 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:18.985 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:18.985 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:18.985 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:18.985 element at address: 0x200028000000 with size: 0.391663 MiB 00:05:18.985 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:18.985 list of standard malloc elements. size: 199.288940 MiB 00:05:18.985 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:18.985 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:18.985 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:18.985 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:18.985 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:18.985 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:18.985 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:18.985 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:18.985 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:18.985 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:18.985 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:18.985 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:18.985 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:18.986 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:18.986 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac913c0 with size: 0.000244 MiB 
00:05:18.986 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:18.986 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:18.987 element at 
address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:18.987 element at address: 0x200028064440 with size: 0.000244 MiB 00:05:18.987 element at address: 0x200028064540 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806b200 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d380 
with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:18.987 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:18.987 list of memzone associated elements. 
size: 599.920898 MiB 00:05:18.987 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:18.987 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:18.987 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:18.987 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:18.987 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:18.987 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58094_0 00:05:18.987 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:18.987 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58094_0 00:05:18.987 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:18.987 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58094_0 00:05:18.987 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:18.987 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:18.987 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:18.987 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:18.987 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:18.988 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58094_0 00:05:18.988 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:18.988 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58094 00:05:18.988 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:18.988 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58094 00:05:18.988 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:18.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:18.988 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:18.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:18.988 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:18.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:18.988 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:18.988 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:18.988 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:18.988 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58094 00:05:18.988 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:18.988 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58094 00:05:18.988 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:18.988 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58094 00:05:18.988 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:18.988 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58094 00:05:18.988 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:18.988 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58094 00:05:18.988 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:18.988 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58094 00:05:18.988 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:18.988 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:18.988 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:18.988 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:18.988 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:18.988 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:18.988 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:18.988 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58094 00:05:18.988 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:18.988 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58094 00:05:18.988 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:18.988 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:18.988 element at address: 0x200028064640 with size: 0.023804 MiB 00:05:18.988 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:18.988 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:18.988 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58094 00:05:18.988 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:05:18.988 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:18.988 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:18.988 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58094 00:05:18.988 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:18.988 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58094 00:05:18.988 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:18.988 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58094 00:05:18.988 element at address: 0x20002806b300 with size: 0.000366 MiB 00:05:18.988 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:18.988 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:18.988 16:32:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58094 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58094 ']' 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58094 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58094 00:05:18.988 killing process with pid 58094 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58094' 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58094 00:05:18.988 16:32:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58094 00:05:20.362 ************************************ 00:05:20.362 END TEST dpdk_mem_utility 00:05:20.362 ************************************ 00:05:20.362 00:05:20.362 real 0m2.656s 00:05:20.362 user 0m2.674s 00:05:20.362 sys 0m0.396s 00:05:20.362 16:32:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.362 16:32:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.362 16:32:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.362 16:32:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.362 16:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.362 16:32:05 -- common/autotest_common.sh@10 -- # set +x 
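The element and memzone listing printed above by the dpdk_mem_utility test enumerates the malloc elements and memzones of the DPDK heap together with their addresses and sizes. As a rough sanity check of such a dump, the reported sizes can be totalled from a saved copy of the console output; the pipeline below is only a sketch and assumes the log was saved to a file named console.log (a hypothetical name, not part of the test suite):

  # hypothetical: totals every "with size: ... MiB" entry from a saved copy of this log
  grep -o 'with size: [0-9.]* MiB' console.log \
    | awk '{sum += $3} END {printf "reported total: %.3f MiB\n", sum}'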
00:05:20.362 ************************************ 00:05:20.362 START TEST event 00:05:20.362 ************************************ 00:05:20.362 16:32:05 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.620 * Looking for test storage... 00:05:20.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.620 16:32:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.620 16:32:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.620 16:32:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.620 16:32:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.620 16:32:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.620 16:32:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.620 16:32:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.620 16:32:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.620 16:32:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.620 16:32:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.620 16:32:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.620 16:32:05 event -- scripts/common.sh@344 -- # case "$op" in 00:05:20.620 16:32:05 event -- scripts/common.sh@345 -- # : 1 00:05:20.620 16:32:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.620 16:32:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.620 16:32:05 event -- scripts/common.sh@365 -- # decimal 1 00:05:20.620 16:32:05 event -- scripts/common.sh@353 -- # local d=1 00:05:20.620 16:32:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.620 16:32:05 event -- scripts/common.sh@355 -- # echo 1 00:05:20.620 16:32:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.620 16:32:05 event -- scripts/common.sh@366 -- # decimal 2 00:05:20.620 16:32:05 event -- scripts/common.sh@353 -- # local d=2 00:05:20.620 16:32:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.620 16:32:05 event -- scripts/common.sh@355 -- # echo 2 00:05:20.620 16:32:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.620 16:32:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.620 16:32:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.620 16:32:05 event -- scripts/common.sh@368 -- # return 0 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.620 --rc genhtml_branch_coverage=1 00:05:20.620 --rc genhtml_function_coverage=1 00:05:20.620 --rc genhtml_legend=1 00:05:20.620 --rc geninfo_all_blocks=1 00:05:20.620 --rc geninfo_unexecuted_blocks=1 00:05:20.620 00:05:20.620 ' 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.620 --rc genhtml_branch_coverage=1 00:05:20.620 --rc genhtml_function_coverage=1 00:05:20.620 --rc genhtml_legend=1 00:05:20.620 --rc 
geninfo_all_blocks=1 00:05:20.620 --rc geninfo_unexecuted_blocks=1 00:05:20.620 00:05:20.620 ' 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.620 --rc genhtml_branch_coverage=1 00:05:20.620 --rc genhtml_function_coverage=1 00:05:20.620 --rc genhtml_legend=1 00:05:20.620 --rc geninfo_all_blocks=1 00:05:20.620 --rc geninfo_unexecuted_blocks=1 00:05:20.620 00:05:20.620 ' 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.620 --rc genhtml_branch_coverage=1 00:05:20.620 --rc genhtml_function_coverage=1 00:05:20.620 --rc genhtml_legend=1 00:05:20.620 --rc geninfo_all_blocks=1 00:05:20.620 --rc geninfo_unexecuted_blocks=1 00:05:20.620 00:05:20.620 ' 00:05:20.620 16:32:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:20.620 16:32:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.620 16:32:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:20.620 16:32:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.620 16:32:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.620 ************************************ 00:05:20.620 START TEST event_perf 00:05:20.620 ************************************ 00:05:20.620 16:32:05 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.620 Running I/O for 1 seconds...[2024-11-20 16:32:05.399683] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:20.620 [2024-11-20 16:32:05.399854] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58185 ] 00:05:20.877 [2024-11-20 16:32:05.551787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.877 [2024-11-20 16:32:05.633292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.877 [2024-11-20 16:32:05.633435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.877 [2024-11-20 16:32:05.633404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.877 Running I/O for 1 seconds...[2024-11-20 16:32:05.633469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.251 00:05:22.251 lcore 0: 205847 00:05:22.251 lcore 1: 205849 00:05:22.251 lcore 2: 205849 00:05:22.251 lcore 3: 205848 00:05:22.251 done. 
00:05:22.251 00:05:22.251 real 0m1.399s 00:05:22.251 user 0m4.204s 00:05:22.251 sys 0m0.076s 00:05:22.251 16:32:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.251 16:32:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.251 ************************************ 00:05:22.251 END TEST event_perf 00:05:22.251 ************************************ 00:05:22.251 16:32:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:22.251 16:32:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.251 16:32:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.251 16:32:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.251 ************************************ 00:05:22.251 START TEST event_reactor 00:05:22.251 ************************************ 00:05:22.251 16:32:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:22.251 [2024-11-20 16:32:06.844453] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:22.251 [2024-11-20 16:32:06.844553] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58225 ] 00:05:22.251 [2024-11-20 16:32:06.998278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.251 [2024-11-20 16:32:07.074742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.644 test_start 00:05:23.644 oneshot 00:05:23.644 tick 100 00:05:23.644 tick 100 00:05:23.644 tick 250 00:05:23.644 tick 100 00:05:23.644 tick 100 00:05:23.644 tick 100 00:05:23.644 tick 250 00:05:23.644 tick 500 00:05:23.644 tick 100 00:05:23.644 tick 100 00:05:23.644 tick 250 00:05:23.644 tick 100 00:05:23.644 tick 100 00:05:23.644 test_end 00:05:23.644 00:05:23.644 real 0m1.379s 00:05:23.644 user 0m1.209s 00:05:23.644 sys 0m0.062s 00:05:23.644 16:32:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.644 16:32:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.644 ************************************ 00:05:23.644 END TEST event_reactor 00:05:23.644 ************************************ 00:05:23.644 16:32:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.644 16:32:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:23.644 16:32:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.644 16:32:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.644 ************************************ 00:05:23.644 START TEST event_reactor_perf 00:05:23.644 ************************************ 00:05:23.644 16:32:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.644 [2024-11-20 16:32:08.264255] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:05:23.644 [2024-11-20 16:32:08.264362] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:05:23.644 [2024-11-20 16:32:08.416504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.644 [2024-11-20 16:32:08.496403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.025 test_start 00:05:25.025 test_end 00:05:25.025 Performance: 407155 events per second 00:05:25.025 00:05:25.025 real 0m1.386s 00:05:25.025 user 0m1.207s 00:05:25.025 sys 0m0.072s 00:05:25.025 16:32:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.025 16:32:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.025 ************************************ 00:05:25.025 END TEST event_reactor_perf 00:05:25.025 ************************************ 00:05:25.025 16:32:09 event -- event/event.sh@49 -- # uname -s 00:05:25.025 16:32:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:25.025 16:32:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:25.025 16:32:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.026 16:32:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.026 16:32:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.026 ************************************ 00:05:25.026 START TEST event_scheduler 00:05:25.026 ************************************ 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:25.026 * Looking for test storage... 
00:05:25.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.026 16:32:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.026 --rc genhtml_branch_coverage=1 00:05:25.026 --rc genhtml_function_coverage=1 00:05:25.026 --rc genhtml_legend=1 00:05:25.026 --rc geninfo_all_blocks=1 00:05:25.026 --rc geninfo_unexecuted_blocks=1 00:05:25.026 00:05:25.026 ' 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.026 --rc genhtml_branch_coverage=1 00:05:25.026 --rc genhtml_function_coverage=1 00:05:25.026 --rc genhtml_legend=1 00:05:25.026 --rc geninfo_all_blocks=1 00:05:25.026 --rc geninfo_unexecuted_blocks=1 00:05:25.026 00:05:25.026 ' 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.026 --rc genhtml_branch_coverage=1 00:05:25.026 --rc genhtml_function_coverage=1 00:05:25.026 --rc genhtml_legend=1 00:05:25.026 --rc geninfo_all_blocks=1 00:05:25.026 --rc geninfo_unexecuted_blocks=1 00:05:25.026 00:05:25.026 ' 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.026 --rc genhtml_branch_coverage=1 00:05:25.026 --rc genhtml_function_coverage=1 00:05:25.026 --rc genhtml_legend=1 00:05:25.026 --rc geninfo_all_blocks=1 00:05:25.026 --rc geninfo_unexecuted_blocks=1 00:05:25.026 00:05:25.026 ' 00:05:25.026 16:32:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:25.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.026 16:32:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58332 00:05:25.026 16:32:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.026 16:32:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58332 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58332 ']' 00:05:25.026 16:32:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.026 16:32:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.026 [2024-11-20 16:32:09.860714] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:25.026 [2024-11-20 16:32:09.860808] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58332 ] 00:05:25.284 [2024-11-20 16:32:10.009824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.284 [2024-11-20 16:32:10.112829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.284 [2024-11-20 16:32:10.113146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.284 [2024-11-20 16:32:10.113331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.284 [2024-11-20 16:32:10.113348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.849 16:32:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.849 16:32:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:25.849 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.849 16:32:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.849 16:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.849 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.849 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.849 POWER: Cannot set governor of lcore 0 to performance 00:05:25.849 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.849 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.849 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.849 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.849 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:25.849 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:25.849 POWER: Unable to set Power 
Management Environment for lcore 0 00:05:25.849 [2024-11-20 16:32:10.670717] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:25.849 [2024-11-20 16:32:10.670738] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:25.849 [2024-11-20 16:32:10.670747] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.849 [2024-11-20 16:32:10.670762] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.849 [2024-11-20 16:32:10.670770] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.849 [2024-11-20 16:32:10.670778] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.849 16:32:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.849 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.849 16:32:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.849 16:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 [2024-11-20 16:32:10.895882] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:26.107 16:32:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:26.107 16:32:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.107 16:32:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 ************************************ 00:05:26.107 START TEST scheduler_create_thread 00:05:26.107 ************************************ 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 2 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 3 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.107 4 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 5 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 6 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 7 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 8 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 9 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 10 00:05:26.107 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.107 16:32:10 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:26.108 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.108 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.365 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.365 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.366 16:32:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.366 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.366 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.366 16:32:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.366 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.624 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.624 00:05:26.624 real 0m0.594s 00:05:26.624 user 0m0.013s 00:05:26.624 sys 0m0.006s 00:05:26.624 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.624 16:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.624 ************************************ 00:05:26.624 END TEST scheduler_create_thread 00:05:26.624 ************************************ 00:05:26.883 16:32:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.883 16:32:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58332 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58332 ']' 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58332 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58332 00:05:26.883 16:32:11 event.event_scheduler -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58332' 00:05:26.883 killing process with pid 58332 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58332 00:05:26.883 16:32:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58332 00:05:27.140 [2024-11-20 16:32:11.980920] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:28.075 00:05:28.075 real 0m3.050s 00:05:28.075 user 0m5.857s 00:05:28.075 sys 0m0.340s 00:05:28.075 16:32:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.075 16:32:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.075 ************************************ 00:05:28.075 END TEST event_scheduler 00:05:28.075 ************************************ 00:05:28.075 16:32:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.075 16:32:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.075 16:32:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.075 16:32:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.075 16:32:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.075 ************************************ 00:05:28.075 START TEST app_repeat 00:05:28.075 ************************************ 00:05:28.075 16:32:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58416 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.075 Process app_repeat pid: 58416 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58416' 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.075 spdk_app_start Round 0 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.075 16:32:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:28.075 16:32:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:28.075 16:32:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.075 16:32:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:28.075 16:32:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.075 16:32:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.075 16:32:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.075 [2024-11-20 16:32:12.796602] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:28.075 [2024-11-20 16:32:12.796719] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58416 ] 00:05:28.075 [2024-11-20 16:32:12.956590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.355 [2024-11-20 16:32:13.053241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.355 [2024-11-20 16:32:13.053336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.983 16:32:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.983 16:32:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.983 16:32:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.240 Malloc0 00:05:29.240 16:32:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.240 Malloc1 00:05:29.498 16:32:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.498 /dev/nbd0 00:05:29.498 16:32:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.755 16:32:14 event.app_repeat -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.755 1+0 records in 00:05:29.755 1+0 records out 00:05:29.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027769 s, 14.8 MB/s 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.755 /dev/nbd1 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.755 1+0 records in 00:05:29.755 1+0 records out 00:05:29.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158795 s, 25.8 MB/s 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
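Each /dev/nbdX attach in this round follows the same waitfornbd pattern: export the malloc bdev over NBD, poll /proc/partitions until the kernel publishes the device, then prove it answers I/O with a single O_DIRECT read. A rough sketch of one attach, where $testfile stands for the test/event/nbdtest scratch file seen in the trace and the retry back-off is an assumption (the delay itself is not visible in the xtrace):

  rpc_cmd -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # 64 MB malloc bdev with 4 KiB blocks -> Malloc0
  rpc_cmd -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break                       # kernel has registered the device
      sleep 0.1                                                       # assumed back-off between polls
  done
  dd if=/dev/nbd0 of="$testfile" bs=4096 count=1 iflag=direct         # one direct read must complete
  [ "$(stat -c %s "$testfile")" != 0 ]                                # and must have produced real data
  rm -f "$testfile"

The same loop runs again for Malloc1 on /dev/nbd1, which is the nbd1 block that follows.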
00:05:29.755 16:32:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.755 16:32:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.057 { 00:05:30.057 "nbd_device": "/dev/nbd0", 00:05:30.057 "bdev_name": "Malloc0" 00:05:30.057 }, 00:05:30.057 { 00:05:30.057 "nbd_device": "/dev/nbd1", 00:05:30.057 "bdev_name": "Malloc1" 00:05:30.057 } 00:05:30.057 ]' 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.057 { 00:05:30.057 "nbd_device": "/dev/nbd0", 00:05:30.057 "bdev_name": "Malloc0" 00:05:30.057 }, 00:05:30.057 { 00:05:30.057 "nbd_device": "/dev/nbd1", 00:05:30.057 "bdev_name": "Malloc1" 00:05:30.057 } 00:05:30.057 ]' 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.057 /dev/nbd1' 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.057 /dev/nbd1' 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.057 16:32:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.058 256+0 records in 00:05:30.058 256+0 records out 00:05:30.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00648952 s, 162 MB/s 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.058 256+0 records in 00:05:30.058 256+0 records out 00:05:30.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192217 s, 54.6 MB/s 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:05:30.058 256+0 records in 00:05:30.058 256+0 records out 00:05:30.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171779 s, 61.0 MB/s 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.058 16:32:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.316 16:32:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.573 16:32:15 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.573 16:32:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.830 16:32:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.830 16:32:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.087 16:32:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.701 [2024-11-20 16:32:16.446518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.701 [2024-11-20 16:32:16.526709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.701 [2024-11-20 16:32:16.526881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.958 [2024-11-20 16:32:16.627084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.958 [2024-11-20 16:32:16.627133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.487 spdk_app_start Round 1 00:05:34.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.487 16:32:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.487 16:32:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.487 16:32:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:34.487 16:32:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:34.487 16:32:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.487 16:32:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.487 16:32:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
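The nbd_rpc_data_verify step that just completed Round 0 writes one random pattern through both NBD devices and reads it back for comparison. Stripped of the xtrace noise, the data path is roughly the following, with $tmp standing for the test/event/nbdrandtest file in the trace:

  dd if=/dev/urandom of="$tmp" bs=4096 count=256                 # 1 MiB of random data, 256 x 4 KiB blocks
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct      # push the pattern through each NBD export
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                                 # byte-for-byte verify what the malloc bdev stored
  done
  rm "$tmp"

Any mismatch makes cmp exit non-zero and fails the round; the dd throughput figures printed above are incidental, only the completion and the compare matter to the test.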
00:05:34.487 16:32:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.487 16:32:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.487 16:32:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.487 16:32:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:34.487 16:32:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.487 Malloc0 00:05:34.487 16:32:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.745 Malloc1 00:05:34.745 16:32:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.745 16:32:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.004 /dev/nbd0 00:05:35.004 16:32:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.004 16:32:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.004 1+0 records in 00:05:35.004 1+0 records out 
00:05:35.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345653 s, 11.9 MB/s 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.004 16:32:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.004 16:32:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.004 16:32:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.004 16:32:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.263 /dev/nbd1 00:05:35.263 16:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.263 16:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.263 1+0 records in 00:05:35.263 1+0 records out 00:05:35.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260124 s, 15.7 MB/s 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.263 16:32:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.263 16:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.263 16:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.263 16:32:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.263 16:32:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.263 16:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.521 { 00:05:35.521 "nbd_device": "/dev/nbd0", 00:05:35.521 "bdev_name": "Malloc0" 00:05:35.521 }, 00:05:35.521 { 00:05:35.521 "nbd_device": "/dev/nbd1", 00:05:35.521 "bdev_name": "Malloc1" 00:05:35.521 } 
00:05:35.521 ]' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.521 { 00:05:35.521 "nbd_device": "/dev/nbd0", 00:05:35.521 "bdev_name": "Malloc0" 00:05:35.521 }, 00:05:35.521 { 00:05:35.521 "nbd_device": "/dev/nbd1", 00:05:35.521 "bdev_name": "Malloc1" 00:05:35.521 } 00:05:35.521 ]' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.521 /dev/nbd1' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.521 /dev/nbd1' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.521 256+0 records in 00:05:35.521 256+0 records out 00:05:35.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617788 s, 170 MB/s 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.521 256+0 records in 00:05:35.521 256+0 records out 00:05:35.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154733 s, 67.8 MB/s 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.521 256+0 records in 00:05:35.521 256+0 records out 00:05:35.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182559 s, 57.4 MB/s 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.521 16:32:20 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.521 16:32:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.780 16:32:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.085 16:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.365 16:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.365 16:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.365 16:32:21 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:36.365 16:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.365 16:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.365 16:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.366 16:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.366 16:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.366 16:32:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.366 16:32:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.366 16:32:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.366 16:32:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.366 16:32:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.624 16:32:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.190 [2024-11-20 16:32:21.905828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.190 [2024-11-20 16:32:21.986373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.190 [2024-11-20 16:32:21.986424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.447 [2024-11-20 16:32:22.083268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.447 [2024-11-20 16:32:22.083331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.974 spdk_app_start Round 2 00:05:39.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.974 16:32:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.974 16:32:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:39.974 16:32:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
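After both devices are stopped, the harness asserts that no NBD exports remain by counting entries returned from nbd_get_disks, which is what the empty-JSON / count=0 lines above show. A condensed equivalent of that teardown check (the trailing true keeps grep -c's non-zero exit for zero matches from aborting the script):

  rpc_cmd -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc_cmd -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  count=$(rpc_cmd -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]                                             # nothing may still be exported

Only once this holds does the round issue spdk_kill_instance SIGTERM, as the next trace lines do.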
00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.974 16:32:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.974 16:32:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.974 Malloc0 00:05:39.974 16:32:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.231 Malloc1 00:05:40.231 16:32:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.231 16:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.487 /dev/nbd0 00:05:40.488 16:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.488 16:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.488 1+0 records in 00:05:40.488 1+0 records out 
00:05:40.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240757 s, 17.0 MB/s 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.488 16:32:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.488 16:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.488 16:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.488 16:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.744 /dev/nbd1 00:05:40.744 16:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.744 16:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.744 1+0 records in 00:05:40.744 1+0 records out 00:05:40.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218135 s, 18.8 MB/s 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.744 16:32:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.744 16:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.744 16:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.744 16:32:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.744 16:32:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.744 16:32:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.001 { 00:05:41.001 "nbd_device": "/dev/nbd0", 00:05:41.001 "bdev_name": "Malloc0" 00:05:41.001 }, 00:05:41.001 { 00:05:41.001 "nbd_device": "/dev/nbd1", 00:05:41.001 "bdev_name": "Malloc1" 00:05:41.001 } 
00:05:41.001 ]' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.001 { 00:05:41.001 "nbd_device": "/dev/nbd0", 00:05:41.001 "bdev_name": "Malloc0" 00:05:41.001 }, 00:05:41.001 { 00:05:41.001 "nbd_device": "/dev/nbd1", 00:05:41.001 "bdev_name": "Malloc1" 00:05:41.001 } 00:05:41.001 ]' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.001 /dev/nbd1' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.001 /dev/nbd1' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.001 256+0 records in 00:05:41.001 256+0 records out 00:05:41.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627444 s, 167 MB/s 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.001 256+0 records in 00:05:41.001 256+0 records out 00:05:41.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191804 s, 54.7 MB/s 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.001 256+0 records in 00:05:41.001 256+0 records out 00:05:41.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159444 s, 65.8 MB/s 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.001 16:32:25 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.001 16:32:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.257 16:32:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.514 16:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.771 16:32:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.771 16:32:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.028 16:32:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.594 [2024-11-20 16:32:27.328682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.594 [2024-11-20 16:32:27.410138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.594 [2024-11-20 16:32:27.410262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.871 [2024-11-20 16:32:27.509926] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.871 [2024-11-20 16:32:27.510001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.408 16:32:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58416 /var/tmp/spdk-nbd.sock 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58416 ']' 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
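Each of the three rounds above is one pass of the app_repeat driver in test/event/event.sh: wait for the app's RPC socket, recreate the two malloc bdevs, run the NBD write/verify, then ask the app to restart itself. A hedged reconstruction of that outer loop, pieced together from the event.sh line references in the trace ($repeat_pid is the pid captured when app_repeat was launched):

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      rpc_cmd -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # Malloc0
      rpc_cmd -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # Malloc1
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      rpc_cmd -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM        # app_repeat re-launches itself for the next round
      sleep 3
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock                      # the final "Round 3" start-up
  killprocess "$repeat_pid"

That final waitforlisten/killprocess pair is what produces the Round 3 start-up and the "killing process with pid 58416" messages that close the app_repeat test below.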
00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.408 16:32:29 event.app_repeat -- event/event.sh@39 -- # killprocess 58416 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58416 ']' 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58416 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.408 16:32:29 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58416 00:05:45.408 killing process with pid 58416 00:05:45.408 16:32:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.408 16:32:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.408 16:32:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58416' 00:05:45.408 16:32:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58416 00:05:45.408 16:32:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58416 00:05:45.666 spdk_app_start is called in Round 0. 00:05:45.666 Shutdown signal received, stop current app iteration 00:05:45.666 Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 reinitialization... 00:05:45.666 spdk_app_start is called in Round 1. 00:05:45.666 Shutdown signal received, stop current app iteration 00:05:45.666 Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 reinitialization... 00:05:45.666 spdk_app_start is called in Round 2. 00:05:45.666 Shutdown signal received, stop current app iteration 00:05:45.666 Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 reinitialization... 00:05:45.666 spdk_app_start is called in Round 3. 00:05:45.666 Shutdown signal received, stop current app iteration 00:05:45.666 16:32:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.666 16:32:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.666 00:05:45.666 real 0m17.763s 00:05:45.666 user 0m38.936s 00:05:45.666 sys 0m2.112s 00:05:45.666 16:32:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.666 16:32:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.666 ************************************ 00:05:45.666 END TEST app_repeat 00:05:45.666 ************************************ 00:05:45.923 16:32:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.923 16:32:30 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.923 16:32:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.923 16:32:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.923 16:32:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.923 ************************************ 00:05:45.923 START TEST cpu_locks 00:05:45.923 ************************************ 00:05:45.923 16:32:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.923 * Looking for test storage... 
00:05:45.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.923 16:32:30 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.923 16:32:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.923 16:32:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.923 16:32:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.923 16:32:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.924 16:32:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.924 --rc genhtml_branch_coverage=1 00:05:45.924 --rc genhtml_function_coverage=1 00:05:45.924 --rc genhtml_legend=1 00:05:45.924 --rc geninfo_all_blocks=1 00:05:45.924 --rc geninfo_unexecuted_blocks=1 00:05:45.924 00:05:45.924 ' 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.924 --rc genhtml_branch_coverage=1 00:05:45.924 --rc genhtml_function_coverage=1 
00:05:45.924 --rc genhtml_legend=1 00:05:45.924 --rc geninfo_all_blocks=1 00:05:45.924 --rc geninfo_unexecuted_blocks=1 00:05:45.924 00:05:45.924 ' 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.924 --rc genhtml_branch_coverage=1 00:05:45.924 --rc genhtml_function_coverage=1 00:05:45.924 --rc genhtml_legend=1 00:05:45.924 --rc geninfo_all_blocks=1 00:05:45.924 --rc geninfo_unexecuted_blocks=1 00:05:45.924 00:05:45.924 ' 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.924 --rc genhtml_branch_coverage=1 00:05:45.924 --rc genhtml_function_coverage=1 00:05:45.924 --rc genhtml_legend=1 00:05:45.924 --rc geninfo_all_blocks=1 00:05:45.924 --rc geninfo_unexecuted_blocks=1 00:05:45.924 00:05:45.924 ' 00:05:45.924 16:32:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.924 16:32:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.924 16:32:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.924 16:32:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.924 16:32:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 ************************************ 00:05:45.924 START TEST default_locks 00:05:45.924 ************************************ 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58841 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58841 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58841 ']' 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 16:32:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.924 [2024-11-20 16:32:30.782109] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
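The lcov probe traced above is a field-by-field numeric version comparison: scripts/common.sh splits both version strings on the characters . - : and walks the fields until one side wins, which is how "lt 1.15 2" decides that the installed lcov still needs the extra --rc coverage switches. A minimal bash sketch of that idea (illustration only, not the actual scripts/common.sh code):

#!/usr/bin/env bash
# Sketch: return 0 when version $1 is older than version $2, comparing
# dot/dash/colon separated fields numerically, in the spirit of the
# lt/cmp_versions helpers seen in the trace above.
version_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "old lcov detected: enable branch/function coverage flags"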
00:05:45.924 [2024-11-20 16:32:30.782229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58841 ] 00:05:46.181 [2024-11-20 16:32:30.942859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.181 [2024-11-20 16:32:31.039116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.748 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.748 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:46.748 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58841 00:05:46.748 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.748 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58841 00:05:47.005 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58841 00:05:47.005 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58841 ']' 00:05:47.005 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58841 00:05:47.005 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:47.005 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.005 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58841 00:05:47.262 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.262 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.262 killing process with pid 58841 00:05:47.262 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58841' 00:05:47.262 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58841 00:05:47.262 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58841 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58841 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58841 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58841 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58841 ']' 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.637 16:32:33 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.637 ERROR: process (pid: 58841) is no longer running 00:05:48.637 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58841) - No such process 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.637 16:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.637 00:05:48.637 real 0m2.696s 00:05:48.637 user 0m2.722s 00:05:48.638 sys 0m0.437s 00:05:48.638 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.638 16:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.638 ************************************ 00:05:48.638 END TEST default_locks 00:05:48.638 ************************************ 00:05:48.638 16:32:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.638 16:32:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.638 16:32:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.638 16:32:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.638 ************************************ 00:05:48.638 START TEST default_locks_via_rpc 00:05:48.638 ************************************ 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58905 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58905 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58905 ']' 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
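The default_locks teardown above is the killprocess helper in action: it checks that pid 58841 is still alive, reads the command name with ps to be sure it is an SPDK reactor rather than, say, a sudo wrapper, sends the kill, and then waits for the pid to disappear so the follow-up NOT waitforlisten can prove the target is gone. A hedged bash sketch of that shape (the PID is the hypothetical one from the log):

# Sketch: stop an spdk_tgt the way the killprocess trace above does.
kill_spdk_tgt() {
    local pid=$1
    kill -0 "$pid" || return 1                      # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")         # e.g. "reactor_0"
    [[ $name == sudo ]] && return 1                 # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it if it is our child
}

kill_spdk_tgt 58841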
00:05:48.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.638 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.638 [2024-11-20 16:32:33.515518] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:48.638 [2024-11-20 16:32:33.515642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:05:48.895 [2024-11-20 16:32:33.670724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.895 [2024-11-20 16:32:33.771772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58905 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58905 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58905 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58905 ']' 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58905 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.830 
16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58905 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.830 killing process with pid 58905 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58905' 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58905 00:05:49.830 16:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58905 00:05:51.203 00:05:51.203 real 0m2.520s 00:05:51.203 user 0m2.549s 00:05:51.203 sys 0m0.426s 00:05:51.203 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.203 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.203 ************************************ 00:05:51.203 END TEST default_locks_via_rpc 00:05:51.203 ************************************ 00:05:51.203 16:32:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.203 16:32:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.203 16:32:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.203 16:32:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.203 ************************************ 00:05:51.203 START TEST non_locking_app_on_locked_coremask 00:05:51.203 ************************************ 00:05:51.203 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:51.203 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58957 00:05:51.203 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58957 /var/tmp/spdk.sock 00:05:51.203 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58957 ']' 00:05:51.203 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.203 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.204 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.204 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.204 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.204 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.204 [2024-11-20 16:32:36.076420] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
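default_locks_via_rpc, just finished above, drives the same per-core lock files over JSON-RPC instead of process flags: framework_disable_cpumask_locks releases the lock held by the running target, framework_enable_cpumask_locks claims it again, and lslocks is used in between to confirm the state. The rpc_cmd helper in the trace forwards to SPDK's rpc.py, so the flow looks roughly like this sketch (the script path and the unscoped lslocks call are assumptions; the real helper pins lslocks to the target's PID):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed checkout layout
SOCK=/var/tmp/spdk.sock

"$RPC" -s "$SOCK" framework_disable_cpumask_locks             # drop the core lock
lslocks | grep -q spdk_cpu_lock && echo "unexpected: lock file still held" >&2

"$RPC" -s "$SOCK" framework_enable_cpumask_locks              # claim it again
lslocks | grep -q spdk_cpu_lock || echo "unexpected: lock not re-acquired" >&2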
00:05:51.204 [2024-11-20 16:32:36.076547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:05:51.462 [2024-11-20 16:32:36.235659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.462 [2024-11-20 16:32:36.333969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58974 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58974 /var/tmp/spdk2.sock 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58974 ']' 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.395 16:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.395 [2024-11-20 16:32:36.989878] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:52.395 [2024-11-20 16:32:36.990024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:05:52.395 [2024-11-20 16:32:37.167417] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.395 [2024-11-20 16:32:37.167475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.653 [2024-11-20 16:32:37.376960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58957 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58957 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58957 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58957 ']' 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58957 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.029 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58957 00:05:54.287 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.287 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.287 killing process with pid 58957 00:05:54.287 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58957' 00:05:54.287 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58957 00:05:54.287 16:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58957 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58974 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58974 ']' 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58974 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58974 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.845 killing process with pid 58974 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58974' 00:05:56.845 16:32:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58974 00:05:56.845 16:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58974 00:05:58.220 00:05:58.220 real 0m6.720s 00:05:58.220 user 0m6.961s 00:05:58.220 sys 0m0.847s 00:05:58.220 16:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.220 ************************************ 00:05:58.220 END TEST non_locking_app_on_locked_coremask 00:05:58.220 ************************************ 00:05:58.220 16:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.220 16:32:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:58.220 16:32:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.220 16:32:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.220 16:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.220 ************************************ 00:05:58.220 START TEST locking_app_on_unlocked_coremask 00:05:58.220 ************************************ 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59076 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59076 /var/tmp/spdk.sock 00:05:58.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59076 ']' 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.220 16:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.220 [2024-11-20 16:32:42.846259] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:58.220 [2024-11-20 16:32:42.846391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59076 ] 00:05:58.220 [2024-11-20 16:32:43.001790] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
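non_locking_app_on_locked_coremask, wrapped up above, demonstrates the supported way to share a core: the first spdk_tgt claims core 0 as usual, and a second instance may run on the same mask only because it is started with --disable-cpumask-locks and its own RPC socket. A condensed sketch of that launch sequence (binary path and flags are the ones from the trace; readiness handling is simplified, the real test waits on each socket with waitforlisten):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                                                  # claims core 0
pid1=$!
"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, no lock
pid2=$!
sleep 1                                          # crude stand-in for waitforlisten

lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first target holds the core 0 lock"
lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "second target holds no lock, as intended"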
00:05:58.220 [2024-11-20 16:32:43.001835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.220 [2024-11-20 16:32:43.086146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.787 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59092 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59092 /var/tmp/spdk2.sock 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59092 ']' 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.045 16:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.045 [2024-11-20 16:32:43.749499] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:05:59.045 [2024-11-20 16:32:43.749620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:05:59.045 [2024-11-20 16:32:43.917964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.302 [2024-11-20 16:32:44.089446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.254 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.254 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.254 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59092 00:06:00.254 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59092 00:06:00.254 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59076 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59076 ']' 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59076 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59076 00:06:00.512 killing process with pid 59076 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59076' 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59076 00:06:00.512 16:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59076 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59092 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59092 ']' 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59092 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59092 00:06:03.039 killing process with pid 59092 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.039 16:32:47 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59092' 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59092 00:06:03.039 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59092 00:06:04.410 ************************************ 00:06:04.411 END TEST locking_app_on_unlocked_coremask 00:06:04.411 ************************************ 00:06:04.411 00:06:04.411 real 0m6.335s 00:06:04.411 user 0m6.576s 00:06:04.411 sys 0m0.844s 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.411 16:32:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.411 16:32:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.411 16:32:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.411 16:32:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.411 ************************************ 00:06:04.411 START TEST locking_app_on_locked_coremask 00:06:04.411 ************************************ 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59183 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59183 /var/tmp/spdk.sock 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59183 ']' 00:06:04.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.411 16:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.411 [2024-11-20 16:32:49.222551] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:06:04.411 [2024-11-20 16:32:49.222676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:06:04.669 [2024-11-20 16:32:49.379431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.669 [2024-11-20 16:32:49.481543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.235 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.235 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.235 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59199 00:06:05.235 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59199 /var/tmp/spdk2.sock 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59199 /var/tmp/spdk2.sock 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59199 /var/tmp/spdk2.sock 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59199 ']' 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.236 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.495 [2024-11-20 16:32:50.146845] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:06:05.495 [2024-11-20 16:32:50.146966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:06:05.495 [2024-11-20 16:32:50.322425] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59183 has claimed it. 00:06:05.495 [2024-11-20 16:32:50.322494] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.060 ERROR: process (pid: 59199) is no longer running 00:06:06.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59199) - No such process 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59183 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59183 00:06:06.061 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.318 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59183 00:06:06.318 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59183 ']' 00:06:06.318 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59183 00:06:06.318 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.318 16:32:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.318 16:32:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59183 00:06:06.318 killing process with pid 59183 00:06:06.318 16:32:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.318 16:32:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.318 16:32:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59183' 00:06:06.318 16:32:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59183 00:06:06.318 16:32:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59183 00:06:07.689 00:06:07.689 real 0m3.277s 00:06:07.689 user 0m3.499s 00:06:07.689 sys 0m0.531s 00:06:07.689 16:32:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.689 16:32:52 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:07.689 ************************************ 00:06:07.689 END TEST locking_app_on_locked_coremask 00:06:07.689 ************************************ 00:06:07.689 16:32:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.689 16:32:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.689 16:32:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.689 16:32:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.689 ************************************ 00:06:07.689 START TEST locking_overlapped_coremask 00:06:07.689 ************************************ 00:06:07.689 16:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:07.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59258 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59258 /var/tmp/spdk.sock 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59258 ']' 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.690 16:32:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.690 [2024-11-20 16:32:52.542825] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
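locking_app_on_locked_coremask, above, covers the failure path: with core 0 already claimed by pid 59183, a second target on the same mask aborts with "Cannot create lock on core 0", and the harness asserts that outcome through its NOT wrapper, which only succeeds when the wrapped command fails. A stripped-down sketch of such a wrapper (the real NOT in autotest_common.sh also validates the argument type and the exit status):

# Sketch: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        echo "expected '$*' to fail, but it succeeded" >&2
        return 1
    fi
    return 0
}

# usage: a second target must refuse to start on an already-claimed core, e.g.
#   NOT waitforlisten "$pid2" /var/tmp/spdk2.sock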
00:06:07.690 [2024-11-20 16:32:52.542948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59258 ] 00:06:07.948 [2024-11-20 16:32:52.699696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.948 [2024-11-20 16:32:52.789816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.948 [2024-11-20 16:32:52.790275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.948 [2024-11-20 16:32:52.790463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59270 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59270 /var/tmp/spdk2.sock 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59270 /var/tmp/spdk2.sock 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59270 /var/tmp/spdk2.sock 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59270 ']' 00:06:08.514 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.515 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.515 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.515 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.515 16:32:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 [2024-11-20 16:32:53.453290] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:06:08.774 [2024-11-20 16:32:53.453742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59270 ] 00:06:08.774 [2024-11-20 16:32:53.632046] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59258 has claimed it. 00:06:08.774 [2024-11-20 16:32:53.632109] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.339 ERROR: process (pid: 59270) is no longer running 00:06:09.339 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59270) - No such process 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59258 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59258 ']' 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59258 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59258 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59258' 00:06:09.339 killing process with pid 59258 00:06:09.339 16:32:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59258 00:06:09.339 16:32:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59258 00:06:10.715 00:06:10.715 real 0m2.869s 00:06:10.715 user 0m7.795s 00:06:10.715 sys 0m0.448s 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.715 ************************************ 00:06:10.715 END TEST locking_overlapped_coremask 00:06:10.715 ************************************ 00:06:10.715 16:32:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.715 16:32:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.715 16:32:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.715 16:32:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.715 ************************************ 00:06:10.715 START TEST locking_overlapped_coremask_via_rpc 00:06:10.715 ************************************ 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59323 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59323 /var/tmp/spdk.sock 00:06:10.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59323 ']' 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.715 16:32:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.715 [2024-11-20 16:32:55.470917] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:10.715 [2024-11-20 16:32:55.471366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:06:10.973 [2024-11-20 16:32:55.653351] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
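locking_overlapped_coremask, above, ends with check_remaining_locks: once the overlapping second target has been rejected, the lock files left under /var/tmp must be exactly the ones belonging to the first target's 0x7 mask, spdk_cpu_lock_000 through spdk_cpu_lock_002. The glob-versus-brace-expansion comparison from the trace, as a standalone sketch:

# Sketch: assert that exactly cores 0-2 are still locked after the test.
locks=(/var/tmp/spdk_cpu_lock_*)
expected=(/var/tmp/spdk_cpu_lock_{000..002})
if [[ ${locks[*]} == "${expected[*]}" ]]; then
    echo "remaining lock files match the 0x7 cpumask"
else
    echo "unexpected lock files: ${locks[*]}" >&2
    exit 1
fi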
00:06:10.973 [2024-11-20 16:32:55.653431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.973 [2024-11-20 16:32:55.781320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.973 [2024-11-20 16:32:55.781510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.973 [2024-11-20 16:32:55.781531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59341 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59341 /var/tmp/spdk2.sock 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59341 ']' 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.538 16:32:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.538 [2024-11-20 16:32:56.354635] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:11.538 [2024-11-20 16:32:56.355102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59341 ] 00:06:11.797 [2024-11-20 16:32:56.536432] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.797 [2024-11-20 16:32:56.536490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.055 [2024-11-20 16:32:56.746203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.055 [2024-11-20 16:32:56.746259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.055 [2024-11-20 16:32:56.746289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.989 [2024-11-20 16:32:57.766530] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59323 has claimed it. 00:06:12.989 request: 00:06:12.989 { 00:06:12.989 "method": "framework_enable_cpumask_locks", 00:06:12.989 "req_id": 1 00:06:12.989 } 00:06:12.989 Got JSON-RPC error response 00:06:12.989 response: 00:06:12.989 { 00:06:12.989 "code": -32603, 00:06:12.989 "message": "Failed to claim CPU core: 2" 00:06:12.989 } 00:06:12.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
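[editorial aside] The failed framework_enable_cpumask_locks call shown above can be reproduced by talking to the second target's RPC socket directly. This is only a sketch: it assumes the standard JSON-RPC 2.0 framing over the UNIX-domain socket that scripts/rpc.py uses, and the socket path is taken from the -r argument above.

    import json
    import socket

    def rpc(sock_path, method, params=None):
        # Send one JSON-RPC 2.0 request and read until a complete JSON reply arrives.
        req = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                buf += s.recv(4096)
                try:
                    return json.loads(buf)
                except json.JSONDecodeError:
                    continue  # reply not fully received yet

    resp = rpc("/var/tmp/spdk2.sock", "framework_enable_cpumask_locks")
    # With core 2 already claimed by the first target, the reply carries
    # error code -32603, "Failed to claim CPU core: 2", as in the log above.
    print(resp.get("error") or resp.get("result"))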
00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59323 /var/tmp/spdk.sock 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59323 ']' 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.989 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59341 /var/tmp/spdk2.sock 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59341 ']' 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
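[editorial aside] waitforlisten, traced above with max_retries=100, simply polls until the target's RPC socket is accepting connections. A rough equivalent (a sketch, not the autotest_common.sh helper itself, which polls via rpc.py):

    import socket
    import time

    def wait_for_listen(sock_path="/var/tmp/spdk2.sock", max_retries=100, delay=0.1):
        # Retry connecting to the UNIX socket until the target is up or retries run out.
        for _ in range(max_retries):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True          # target is listening
            except OSError:
                time.sleep(delay)    # socket missing or not accepting yet
            finally:
                s.close()
        return False

    assert wait_for_listen(), "spdk_tgt never started listening"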
00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.247 16:32:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.505 00:06:13.505 real 0m2.801s 00:06:13.505 user 0m1.019s 00:06:13.505 sys 0m0.126s 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.505 16:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.505 ************************************ 00:06:13.505 END TEST locking_overlapped_coremask_via_rpc 00:06:13.505 ************************************ 00:06:13.505 16:32:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.505 16:32:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59323 ]] 00:06:13.505 16:32:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59323 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59323 ']' 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59323 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59323 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.505 killing process with pid 59323 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59323' 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59323 00:06:13.505 16:32:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59323 00:06:14.877 16:32:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59341 ]] 00:06:14.877 16:32:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59341 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59341 ']' 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59341 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.877 
16:32:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59341 00:06:14.877 killing process with pid 59341 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59341' 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59341 00:06:14.877 16:32:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59341 00:06:15.871 16:33:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.871 16:33:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:15.871 16:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59323 ]] 00:06:15.871 16:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59323 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59323 ']' 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59323 00:06:15.871 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59323) - No such process 00:06:15.871 Process with pid 59323 is not found 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59323 is not found' 00:06:15.871 16:33:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59341 ]] 00:06:15.871 Process with pid 59341 is not found 00:06:15.871 16:33:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59341 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59341 ']' 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59341 00:06:15.871 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59341) - No such process 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59341 is not found' 00:06:15.871 16:33:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.871 00:06:15.871 real 0m30.160s 00:06:15.871 user 0m50.861s 00:06:15.871 sys 0m4.446s 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.871 ************************************ 00:06:15.871 END TEST cpu_locks 00:06:15.871 ************************************ 00:06:15.871 16:33:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.148 ************************************ 00:06:16.148 END TEST event 00:06:16.148 ************************************ 00:06:16.148 00:06:16.148 real 0m55.543s 00:06:16.148 user 1m42.432s 00:06:16.148 sys 0m7.333s 00:06:16.148 16:33:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.148 16:33:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.149 16:33:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.149 16:33:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.149 16:33:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.149 16:33:00 -- common/autotest_common.sh@10 -- # set +x 00:06:16.149 ************************************ 00:06:16.149 START TEST thread 00:06:16.149 ************************************ 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.149 * Looking for test storage... 
00:06:16.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.149 16:33:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.149 16:33:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.149 16:33:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.149 16:33:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.149 16:33:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.149 16:33:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.149 16:33:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.149 16:33:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.149 16:33:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.149 16:33:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.149 16:33:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.149 16:33:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:16.149 16:33:00 thread -- scripts/common.sh@345 -- # : 1 00:06:16.149 16:33:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.149 16:33:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.149 16:33:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:16.149 16:33:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:16.149 16:33:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.149 16:33:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:16.149 16:33:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.149 16:33:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:16.149 16:33:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:16.149 16:33:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.149 16:33:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:16.149 16:33:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.149 16:33:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.149 16:33:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.149 16:33:00 thread -- scripts/common.sh@368 -- # return 0 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.149 --rc genhtml_branch_coverage=1 00:06:16.149 --rc genhtml_function_coverage=1 00:06:16.149 --rc genhtml_legend=1 00:06:16.149 --rc geninfo_all_blocks=1 00:06:16.149 --rc geninfo_unexecuted_blocks=1 00:06:16.149 00:06:16.149 ' 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.149 --rc genhtml_branch_coverage=1 00:06:16.149 --rc genhtml_function_coverage=1 00:06:16.149 --rc genhtml_legend=1 00:06:16.149 --rc geninfo_all_blocks=1 00:06:16.149 --rc geninfo_unexecuted_blocks=1 00:06:16.149 00:06:16.149 ' 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:16.149 --rc genhtml_branch_coverage=1 00:06:16.149 --rc genhtml_function_coverage=1 00:06:16.149 --rc genhtml_legend=1 00:06:16.149 --rc geninfo_all_blocks=1 00:06:16.149 --rc geninfo_unexecuted_blocks=1 00:06:16.149 00:06:16.149 ' 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.149 --rc genhtml_branch_coverage=1 00:06:16.149 --rc genhtml_function_coverage=1 00:06:16.149 --rc genhtml_legend=1 00:06:16.149 --rc geninfo_all_blocks=1 00:06:16.149 --rc geninfo_unexecuted_blocks=1 00:06:16.149 00:06:16.149 ' 00:06:16.149 16:33:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.149 16:33:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.149 ************************************ 00:06:16.149 START TEST thread_poller_perf 00:06:16.149 ************************************ 00:06:16.149 16:33:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.149 [2024-11-20 16:33:00.959303] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:16.149 [2024-11-20 16:33:00.959652] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59496 ] 00:06:16.408 [2024-11-20 16:33:01.119441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.408 [2024-11-20 16:33:01.204592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.408 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:17.781 [2024-11-20T16:33:02.667Z] ====================================== 00:06:17.781 [2024-11-20T16:33:02.667Z] busy:2609497702 (cyc) 00:06:17.781 [2024-11-20T16:33:02.667Z] total_run_count: 392000 00:06:17.781 [2024-11-20T16:33:02.667Z] tsc_hz: 2600000000 (cyc) 00:06:17.781 [2024-11-20T16:33:02.667Z] ====================================== 00:06:17.781 [2024-11-20T16:33:02.667Z] poller_cost: 6656 (cyc), 2560 (nsec) 00:06:17.781 ************************************ 00:06:17.781 END TEST thread_poller_perf 00:06:17.781 ************************************ 00:06:17.781 00:06:17.781 real 0m1.406s 00:06:17.781 user 0m1.232s 00:06:17.781 sys 0m0.067s 00:06:17.781 16:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.781 16:33:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.781 16:33:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.781 16:33:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:17.781 16:33:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.781 16:33:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.781 ************************************ 00:06:17.781 START TEST thread_poller_perf 00:06:17.781 ************************************ 00:06:17.781 16:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.781 [2024-11-20 16:33:02.404091] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:17.781 [2024-11-20 16:33:02.404179] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:06:17.781 [2024-11-20 16:33:02.564603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.781 Running 1000 pollers for 1 seconds with 0 microseconds period. 
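[editorial aside] The poller_cost line for the 1-microsecond run above follows directly from the printed counters: busy cycles divided by run count, then converted to nanoseconds via tsc_hz. A worked check with the values from the log (the exact rounding inside poller_perf may differ, but this reproduces the reported numbers):

    busy_cycles = 2_609_497_702   # "busy" (cyc) from the run above
    run_count   = 392_000         # total_run_count
    tsc_hz      = 2_600_000_000   # TSC ticks per second

    cost_cyc  = busy_cycles // run_count                  # 6656 cyc, as reported
    cost_nsec = cost_cyc * 1_000_000_000 // tsc_hz        # 2560 nsec, as reported
    print(cost_cyc, cost_nsec)

The second run (0-microsecond period) reported below works out the same way: 2603107208 / 3815000 ≈ 682 cyc ≈ 262 nsec.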
00:06:17.781 [2024-11-20 16:33:02.662884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.157 [2024-11-20T16:33:04.043Z] ====================================== 00:06:19.157 [2024-11-20T16:33:04.043Z] busy:2603107208 (cyc) 00:06:19.157 [2024-11-20T16:33:04.043Z] total_run_count: 3815000 00:06:19.157 [2024-11-20T16:33:04.043Z] tsc_hz: 2600000000 (cyc) 00:06:19.157 [2024-11-20T16:33:04.043Z] ====================================== 00:06:19.157 [2024-11-20T16:33:04.043Z] poller_cost: 682 (cyc), 262 (nsec) 00:06:19.157 00:06:19.157 real 0m1.440s 00:06:19.157 user 0m1.278s 00:06:19.157 sys 0m0.053s 00:06:19.157 16:33:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.157 ************************************ 00:06:19.157 END TEST thread_poller_perf 00:06:19.157 ************************************ 00:06:19.157 16:33:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.157 16:33:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:19.157 00:06:19.157 real 0m3.057s 00:06:19.157 user 0m2.615s 00:06:19.157 sys 0m0.232s 00:06:19.157 ************************************ 00:06:19.157 END TEST thread 00:06:19.157 ************************************ 00:06:19.157 16:33:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.157 16:33:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.157 16:33:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:19.157 16:33:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.157 16:33:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.157 16:33:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.157 16:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:19.157 ************************************ 00:06:19.157 START TEST app_cmdline 00:06:19.157 ************************************ 00:06:19.157 16:33:03 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.157 * Looking for test storage... 
00:06:19.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:19.157 16:33:03 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.157 16:33:03 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.157 16:33:03 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.157 16:33:04 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:19.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.157 16:33:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:19.157 16:33:04 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.157 16:33:04 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.157 --rc genhtml_branch_coverage=1 00:06:19.157 --rc genhtml_function_coverage=1 00:06:19.157 --rc genhtml_legend=1 00:06:19.157 --rc geninfo_all_blocks=1 00:06:19.157 --rc geninfo_unexecuted_blocks=1 00:06:19.157 00:06:19.157 ' 00:06:19.157 16:33:04 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.157 --rc genhtml_branch_coverage=1 00:06:19.157 --rc genhtml_function_coverage=1 00:06:19.157 --rc genhtml_legend=1 00:06:19.157 --rc geninfo_all_blocks=1 00:06:19.157 --rc geninfo_unexecuted_blocks=1 00:06:19.157 00:06:19.157 ' 00:06:19.157 16:33:04 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.157 --rc genhtml_branch_coverage=1 00:06:19.157 --rc genhtml_function_coverage=1 00:06:19.158 --rc genhtml_legend=1 00:06:19.158 --rc geninfo_all_blocks=1 00:06:19.158 --rc geninfo_unexecuted_blocks=1 00:06:19.158 00:06:19.158 ' 00:06:19.158 16:33:04 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.158 --rc genhtml_branch_coverage=1 00:06:19.158 --rc genhtml_function_coverage=1 00:06:19.158 --rc genhtml_legend=1 00:06:19.158 --rc geninfo_all_blocks=1 00:06:19.158 --rc geninfo_unexecuted_blocks=1 00:06:19.158 00:06:19.158 ' 00:06:19.158 16:33:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:19.158 16:33:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59616 00:06:19.158 16:33:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59616 00:06:19.158 16:33:04 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59616 ']' 00:06:19.158 16:33:04 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.158 16:33:04 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.158 16:33:04 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.158 16:33:04 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.158 16:33:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.158 16:33:04 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:19.416 [2024-11-20 16:33:04.093973] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
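[editorial aside] The app_cmdline target above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, and the test below verifies that exactly those two methods are exposed. A sketch of that comparison in Python; methods_json is a hypothetical stand-in for the output of rpc_get_methods:

    import json

    expected = sorted(["rpc_get_methods", "spdk_get_version"])
    methods_json = '["spdk_get_version", "rpc_get_methods"]'  # sample reply, for illustration
    methods = sorted(json.loads(methods_json))

    assert methods == expected, f"unexpected RPC surface: {methods}"
    print("allowlist check passed:", methods)

Any method outside the allowlist, such as env_dpdk_get_mem_stats later in the log, is rejected with -32601 "Method not found".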
00:06:19.416 [2024-11-20 16:33:04.094093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59616 ] 00:06:19.416 [2024-11-20 16:33:04.251415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.674 [2024-11-20 16:33:04.349442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.242 16:33:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.242 16:33:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:20.242 16:33:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:20.242 { 00:06:20.242 "version": "SPDK v25.01-pre git sha1 ede20dc4e", 00:06:20.242 "fields": { 00:06:20.242 "major": 25, 00:06:20.242 "minor": 1, 00:06:20.242 "patch": 0, 00:06:20.242 "suffix": "-pre", 00:06:20.242 "commit": "ede20dc4e" 00:06:20.242 } 00:06:20.242 } 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:20.242 16:33:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:20.242 16:33:05 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.501 request: 00:06:20.501 { 00:06:20.501 "method": "env_dpdk_get_mem_stats", 00:06:20.501 "req_id": 1 00:06:20.501 } 00:06:20.501 Got JSON-RPC error response 00:06:20.501 response: 00:06:20.501 { 00:06:20.501 "code": -32601, 00:06:20.501 "message": "Method not found" 00:06:20.501 } 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.501 16:33:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59616 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59616 ']' 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59616 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59616 00:06:20.501 killing process with pid 59616 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59616' 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 59616 00:06:20.501 16:33:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 59616 00:06:22.400 00:06:22.400 real 0m2.898s 00:06:22.400 user 0m3.089s 00:06:22.400 sys 0m0.388s 00:06:22.400 16:33:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.400 ************************************ 00:06:22.400 16:33:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.400 END TEST app_cmdline 00:06:22.400 ************************************ 00:06:22.400 16:33:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:22.400 16:33:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.400 16:33:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.400 16:33:06 -- common/autotest_common.sh@10 -- # set +x 00:06:22.400 ************************************ 00:06:22.400 START TEST version 00:06:22.400 ************************************ 00:06:22.400 16:33:06 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:22.400 * Looking for test storage... 
00:06:22.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:22.400 16:33:06 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.400 16:33:06 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.400 16:33:06 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.400 16:33:06 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.400 16:33:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.400 16:33:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.400 16:33:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.400 16:33:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.400 16:33:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.400 16:33:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.400 16:33:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.400 16:33:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.400 16:33:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.400 16:33:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.400 16:33:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.400 16:33:06 version -- scripts/common.sh@344 -- # case "$op" in 00:06:22.400 16:33:06 version -- scripts/common.sh@345 -- # : 1 00:06:22.400 16:33:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.400 16:33:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.400 16:33:06 version -- scripts/common.sh@365 -- # decimal 1 00:06:22.400 16:33:06 version -- scripts/common.sh@353 -- # local d=1 00:06:22.400 16:33:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.400 16:33:06 version -- scripts/common.sh@355 -- # echo 1 00:06:22.400 16:33:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.400 16:33:06 version -- scripts/common.sh@366 -- # decimal 2 00:06:22.400 16:33:06 version -- scripts/common.sh@353 -- # local d=2 00:06:22.400 16:33:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.400 16:33:06 version -- scripts/common.sh@355 -- # echo 2 00:06:22.400 16:33:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.400 16:33:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.400 16:33:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.400 16:33:06 version -- scripts/common.sh@368 -- # return 0 00:06:22.401 16:33:06 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.401 16:33:06 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:06 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:06 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.401 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:06 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:06 version -- app/version.sh@17 -- # get_header_version major 00:06:22.401 16:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # cut -f2 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.401 16:33:06 version -- app/version.sh@17 -- # major=25 00:06:22.401 16:33:06 version -- app/version.sh@18 -- # get_header_version minor 00:06:22.401 16:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # cut -f2 00:06:22.401 16:33:06 version -- app/version.sh@18 -- # minor=1 00:06:22.401 16:33:06 version -- app/version.sh@19 -- # get_header_version patch 00:06:22.401 16:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # cut -f2 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.401 16:33:06 version -- app/version.sh@19 -- # patch=0 00:06:22.401 16:33:06 version -- app/version.sh@20 -- # get_header_version suffix 00:06:22.401 16:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # cut -f2 00:06:22.401 16:33:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.401 16:33:06 version -- app/version.sh@20 -- # suffix=-pre 00:06:22.401 16:33:06 version -- app/version.sh@22 -- # version=25.1 00:06:22.401 16:33:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:22.401 16:33:06 version -- app/version.sh@28 -- # version=25.1rc0 00:06:22.401 16:33:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:22.401 16:33:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:22.401 16:33:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:22.401 16:33:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:22.401 ************************************ 00:06:22.401 END TEST version 00:06:22.401 ************************************ 00:06:22.401 00:06:22.401 real 0m0.189s 00:06:22.401 user 0m0.117s 00:06:22.401 sys 0m0.098s 00:06:22.401 16:33:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.401 16:33:07 version -- common/autotest_common.sh@10 -- # set +x 00:06:22.401 16:33:07 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:22.401 16:33:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:22.401 16:33:07 -- spdk/autotest.sh@194 -- # uname -s 00:06:22.401 16:33:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:22.401 16:33:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:22.401 16:33:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:22.401 16:33:07 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:22.401 16:33:07 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:22.401 16:33:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.401 16:33:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.401 16:33:07 -- common/autotest_common.sh@10 -- # set +x 00:06:22.401 ************************************ 00:06:22.401 START TEST blockdev_nvme 00:06:22.401 ************************************ 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:22.401 * Looking for test storage... 00:06:22.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.401 16:33:07 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:07 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.401 --rc genhtml_branch_coverage=1 00:06:22.401 --rc genhtml_function_coverage=1 00:06:22.401 --rc genhtml_legend=1 00:06:22.401 --rc geninfo_all_blocks=1 00:06:22.401 --rc geninfo_unexecuted_blocks=1 00:06:22.401 00:06:22.401 ' 00:06:22.401 16:33:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:22.401 16:33:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.401 16:33:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:22.401 16:33:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59788 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59788 00:06:22.402 16:33:07 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59788 ']' 00:06:22.402 16:33:07 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.402 16:33:07 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.402 16:33:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:22.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.402 16:33:07 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.402 16:33:07 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.402 16:33:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.402 [2024-11-20 16:33:07.255018] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
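[editorial aside] setup_nvme_conf just below feeds the target a bdev subsystem config generated by gen_nvme.sh: one bdev_nvme_attach_controller entry per PCIe controller. A sketch of assembling an equivalent config dict, with the controller names and addresses copied from the JSON that appears in the log below:

    import json

    controllers = {
        "Nvme0": "0000:00:10.0",
        "Nvme1": "0000:00:11.0",
        "Nvme2": "0000:00:12.0",
        "Nvme3": "0000:00:13.0",
    }

    config = {
        "subsystem": "bdev",
        "config": [
            {
                "method": "bdev_nvme_attach_controller",
                "params": {"trtype": "PCIe", "name": name, "traddr": addr},
            }
            for name, addr in controllers.items()
        ],
    }

    print(json.dumps(config, indent=2))  # shape of what load_subsystem_config receives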
00:06:22.402 [2024-11-20 16:33:07.255224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59788 ] 00:06:22.659 [2024-11-20 16:33:07.406964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.659 [2024-11-20 16:33:07.493227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.224 16:33:08 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.224 16:33:08 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:23.224 16:33:08 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:23.224 16:33:08 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:23.224 16:33:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:23.224 16:33:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:23.224 16:33:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:23.483 16:33:08 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:23.483 16:33:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.483 16:33:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.742 16:33:08 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:23.742 16:33:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:23.742 16:33:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:23.743 16:33:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9fc61722-fbb5-4f1d-9b78-dc2153cafe58"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9fc61722-fbb5-4f1d-9b78-dc2153cafe58",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "51d07dc6-fef6-42bf-ad22-8a256aceea13"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "51d07dc6-fef6-42bf-ad22-8a256aceea13",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "83914650-81fe-42e3-a00e-cdad0fbec056"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "83914650-81fe-42e3-a00e-cdad0fbec056",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0f9c4eab-ba80-4904-b252-864a746f955b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0f9c4eab-ba80-4904-b252-864a746f955b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "66f4af1d-c809-4875-a6e4-9d1c77587498"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "66f4af1d-c809-4875-a6e4-9d1c77587498",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b130fde8-f63d-4a91-8b83-297a8d04680c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b130fde8-f63d-4a91-8b83-297a8d04680c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:23.743 16:33:08 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:23.743 16:33:08 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:23.743 16:33:08 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:23.743 16:33:08 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59788 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59788 ']' 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59788 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:23.743 16:33:08 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59788 00:06:23.743 killing process with pid 59788 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59788' 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59788 00:06:23.743 16:33:08 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59788 00:06:25.642 16:33:10 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:25.642 16:33:10 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:25.642 16:33:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:25.642 16:33:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.642 16:33:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.642 ************************************ 00:06:25.642 START TEST bdev_hello_world 00:06:25.642 ************************************ 00:06:25.642 16:33:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:25.642 [2024-11-20 16:33:10.164257] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:25.642 [2024-11-20 16:33:10.164393] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:06:25.642 [2024-11-20 16:33:10.323888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.642 [2024-11-20 16:33:10.425445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.206 [2024-11-20 16:33:10.959711] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:26.206 [2024-11-20 16:33:10.959762] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:26.206 [2024-11-20 16:33:10.959781] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:26.206 [2024-11-20 16:33:10.962176] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:26.206 [2024-11-20 16:33:10.962646] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:26.206 [2024-11-20 16:33:10.962781] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:26.207 [2024-11-20 16:33:10.963023] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
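The notices above show the hello_bdev example completing a full round trip on Nvme0n1: open the bdev, get an I/O channel, write the "Hello World!" string, read it back, then stop the app. A minimal sketch of the same step outside the harness, reusing the paths from this run (test/bdev/bdev.json is the config the harness assembled from the gen_nvme.sh output loaded into the target earlier; both commands assume the SPDK repo root as the working directory):
    # print the bdev_nvme_attach_controller entries for the local PCIe controllers;
    # this is the JSON that was passed to load_subsystem_config above
    scripts/gen_nvme.sh
    # open Nvme0n1, write "Hello World!", read it back, and exit
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1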
00:06:26.207 00:06:26.207 [2024-11-20 16:33:10.963058] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:27.138 ************************************ 00:06:27.138 END TEST bdev_hello_world 00:06:27.138 ************************************ 00:06:27.138 00:06:27.138 real 0m1.583s 00:06:27.138 user 0m1.297s 00:06:27.138 sys 0m0.177s 00:06:27.138 16:33:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.138 16:33:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:27.138 16:33:11 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:27.138 16:33:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.138 16:33:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.138 16:33:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.138 ************************************ 00:06:27.138 START TEST bdev_bounds 00:06:27.138 ************************************ 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:27.138 Process bdevio pid: 59908 00:06:27.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59908 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59908' 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59908 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59908 ']' 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.138 16:33:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:27.138 [2024-11-20 16:33:11.778943] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
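bdev_bounds runs the same NVMe bdevs through the bdevio application: bdevio is started in wait mode against the saved JSON config, waitforlisten blocks until its RPC socket is up, and tests.py perform_tests (invoked just below) drives the per-bdev suites covering boundary reads/writes, compare-and-write, reset, and NVMe passthru. The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) notices interleaved with the passed markers further down are produced by the negative-path checks those tests run on purpose. A rough sketch of the two-step flow, with paths as used in this run:
    # start bdevio and have it wait (-w) for a perform_tests RPC;
    # -s 0 and the --json config match the harness invocation above
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # once /var/tmp/spdk.sock is listening, kick off every registered suite
    test/bdev/bdevio/tests.py perform_tests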
00:06:27.138 [2024-11-20 16:33:11.779063] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59908 ] 00:06:27.138 [2024-11-20 16:33:11.939899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.397 [2024-11-20 16:33:12.044820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.397 [2024-11-20 16:33:12.045081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.397 [2024-11-20 16:33:12.045176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.962 16:33:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.962 16:33:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:27.962 16:33:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:27.962 I/O targets: 00:06:27.962 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:27.962 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:27.962 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:27.962 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:27.962 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:27.962 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:27.962 00:06:27.962 00:06:27.962 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.962 http://cunit.sourceforge.net/ 00:06:27.962 00:06:27.962 00:06:27.962 Suite: bdevio tests on: Nvme3n1 00:06:27.962 Test: blockdev write read block ...passed 00:06:27.962 Test: blockdev write zeroes read block ...passed 00:06:27.962 Test: blockdev write zeroes read no split ...passed 00:06:27.962 Test: blockdev write zeroes read split ...passed 00:06:27.962 Test: blockdev write zeroes read split partial ...passed 00:06:27.962 Test: blockdev reset ...[2024-11-20 16:33:12.759962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:27.962 [2024-11-20 16:33:12.762719] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:27.962 passed 00:06:27.962 Test: blockdev write read 8 blocks ...passed 00:06:27.962 Test: blockdev write read size > 128k ...passed 00:06:27.962 Test: blockdev write read invalid size ...passed 00:06:27.962 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.962 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.962 Test: blockdev write read max offset ...passed 00:06:27.962 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.962 Test: blockdev writev readv 8 blocks ...passed 00:06:27.962 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.962 Test: blockdev writev readv block ...passed 00:06:27.962 Test: blockdev writev readv size > 128k ...passed 00:06:27.962 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.962 Test: blockdev comparev and writev ...[2024-11-20 16:33:12.774085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:27.962 Test: blockdev nvme passthru rw ...passed 00:06:27.962 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2bd20a000 len:0x1000 00:06:27.962 [2024-11-20 16:33:12.774222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:27.963 [2024-11-20 16:33:12.774738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:27.963 [2024-11-20 16:33:12.774766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:27.963 passed 00:06:27.963 Test: blockdev nvme admin passthru ...passed 00:06:27.963 Test: blockdev copy ...passed 00:06:27.963 Suite: bdevio tests on: Nvme2n3 00:06:27.963 Test: blockdev write read block ...passed 00:06:27.963 Test: blockdev write zeroes read block ...passed 00:06:27.963 Test: blockdev write zeroes read no split ...passed 00:06:27.963 Test: blockdev write zeroes read split ...passed 00:06:27.963 Test: blockdev write zeroes read split partial ...passed 00:06:27.963 Test: blockdev reset ...[2024-11-20 16:33:12.825507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:27.963 [2024-11-20 16:33:12.828422] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:27.963 passed 00:06:27.963 Test: blockdev write read 8 blocks ...passed 00:06:27.963 Test: blockdev write read size > 128k ...passed 00:06:27.963 Test: blockdev write read invalid size ...passed 00:06:27.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.963 Test: blockdev write read max offset ...passed 00:06:27.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.963 Test: blockdev writev readv 8 blocks ...passed 00:06:27.963 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.963 Test: blockdev writev readv block ...passed 00:06:27.963 Test: blockdev writev readv size > 128k ...passed 00:06:27.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.963 Test: blockdev comparev and writev ...[2024-11-20 16:33:12.834348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299206000 len:0x1000 00:06:27.963 [2024-11-20 16:33:12.834402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:27.963 passed 00:06:27.963 Test: blockdev nvme passthru rw ...passed 00:06:27.963 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:33:12.834880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:06:27.963 Test: blockdev nvme admin passthru ...passed 00:06:27.963 Test: blockdev copy ... cid:190 PRP1 0x0 PRP2 0x0 00:06:27.963 [2024-11-20 16:33:12.834939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:27.963 passed 00:06:27.963 Suite: bdevio tests on: Nvme2n2 00:06:27.963 Test: blockdev write read block ...passed 00:06:27.963 Test: blockdev write zeroes read block ...passed 00:06:27.963 Test: blockdev write zeroes read no split ...passed 00:06:28.229 Test: blockdev write zeroes read split ...passed 00:06:28.229 Test: blockdev write zeroes read split partial ...passed 00:06:28.229 Test: blockdev reset ...[2024-11-20 16:33:12.879551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:28.229 [2024-11-20 16:33:12.882477] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:28.229 passed 00:06:28.229 Test: blockdev write read 8 blocks ...passed 00:06:28.229 Test: blockdev write read size > 128k ...passed 00:06:28.229 Test: blockdev write read invalid size ...passed 00:06:28.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:28.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:28.229 Test: blockdev write read max offset ...passed 00:06:28.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:28.229 Test: blockdev writev readv 8 blocks ...passed 00:06:28.229 Test: blockdev writev readv 30 x 1block ...passed 00:06:28.229 Test: blockdev writev readv block ...passed 00:06:28.229 Test: blockdev writev readv size > 128k ...passed 00:06:28.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:28.229 Test: blockdev comparev and writev ...[2024-11-20 16:33:12.889993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:06:28.229 Test: blockdev nvme passthru rw ...passed 00:06:28.229 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2ca23c000 len:0x1000 00:06:28.229 [2024-11-20 16:33:12.890111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:28.229 [2024-11-20 16:33:12.890743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:28.229 [2024-11-20 16:33:12.890767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:28.229 passed 00:06:28.229 Test: blockdev nvme admin passthru ...passed 00:06:28.229 Test: blockdev copy ...passed 00:06:28.229 Suite: bdevio tests on: Nvme2n1 00:06:28.229 Test: blockdev write read block ...passed 00:06:28.230 Test: blockdev write zeroes read block ...passed 00:06:28.230 Test: blockdev write zeroes read no split ...passed 00:06:28.230 Test: blockdev write zeroes read split ...passed 00:06:28.230 Test: blockdev write zeroes read split partial ...passed 00:06:28.230 Test: blockdev reset ...[2024-11-20 16:33:12.946474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:28.230 [2024-11-20 16:33:12.949266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:28.230 passed 00:06:28.230 Test: blockdev write read 8 blocks ...passed 00:06:28.230 Test: blockdev write read size > 128k ...passed 00:06:28.230 Test: blockdev write read invalid size ...passed 00:06:28.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:28.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:28.230 Test: blockdev write read max offset ...passed 00:06:28.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:28.230 Test: blockdev writev readv 8 blocks ...passed 00:06:28.230 Test: blockdev writev readv 30 x 1block ...passed 00:06:28.230 Test: blockdev writev readv block ...passed 00:06:28.230 Test: blockdev writev readv size > 128k ...passed 00:06:28.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:28.230 Test: blockdev comparev and writev ...[2024-11-20 16:33:12.955784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca238000 len:0x1000 00:06:28.230 [2024-11-20 16:33:12.955823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:28.230 passed 00:06:28.230 Test: blockdev nvme passthru rw ...passed 00:06:28.230 Test: blockdev nvme passthru vendor specific ...passed 00:06:28.230 Test: blockdev nvme admin passthru ...[2024-11-20 16:33:12.956370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:28.230 [2024-11-20 16:33:12.956407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:28.230 passed 00:06:28.230 Test: blockdev copy ...passed 00:06:28.230 Suite: bdevio tests on: Nvme1n1 00:06:28.230 Test: blockdev write read block ...passed 00:06:28.230 Test: blockdev write zeroes read block ...passed 00:06:28.230 Test: blockdev write zeroes read no split ...passed 00:06:28.230 Test: blockdev write zeroes read split ...passed 00:06:28.231 Test: blockdev write zeroes read split partial ...passed 00:06:28.231 Test: blockdev reset ...[2024-11-20 16:33:13.004271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:28.231 [2024-11-20 16:33:13.006779] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spasseduccessful. 
00:06:28.231 00:06:28.231 Test: blockdev write read 8 blocks ...passed 00:06:28.231 Test: blockdev write read size > 128k ...passed 00:06:28.231 Test: blockdev write read invalid size ...passed 00:06:28.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:28.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:28.231 Test: blockdev write read max offset ...passed 00:06:28.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:28.231 Test: blockdev writev readv 8 blocks ...passed 00:06:28.231 Test: blockdev writev readv 30 x 1block ...passed 00:06:28.231 Test: blockdev writev readv block ...passed 00:06:28.231 Test: blockdev writev readv size > 128k ...passed 00:06:28.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:28.231 Test: blockdev comparev and writev ...[2024-11-20 16:33:13.015576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:28.231 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ca234000 len:0x1000 00:06:28.231 [2024-11-20 16:33:13.015932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:28.231 passed 00:06:28.231 Test: blockdev nvme passthru vendor specific ...passed 00:06:28.231 Test: blockdev nvme admin passthru ...[2024-11-20 16:33:13.016837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:28.232 [2024-11-20 16:33:13.016923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:28.232 passed 00:06:28.232 Test: blockdev copy ...passed 00:06:28.232 Suite: bdevio tests on: Nvme0n1 00:06:28.232 Test: blockdev write read block ...passed 00:06:28.232 Test: blockdev write zeroes read block ...passed 00:06:28.232 Test: blockdev write zeroes read no split ...passed 00:06:28.232 Test: blockdev write zeroes read split ...passed 00:06:28.232 Test: blockdev write zeroes read split partial ...passed 00:06:28.232 Test: blockdev reset ...[2024-11-20 16:33:13.073433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:28.232 [2024-11-20 16:33:13.077129] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:28.232 passed 00:06:28.232 Test: blockdev write read 8 blocks ...passed 00:06:28.232 Test: blockdev write read size > 128k ...passed 00:06:28.232 Test: blockdev write read invalid size ...passed 00:06:28.232 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:28.232 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:28.232 Test: blockdev write read max offset ...passed 00:06:28.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:28.232 Test: blockdev writev readv 8 blocks ...passed 00:06:28.232 Test: blockdev writev readv 30 x 1block ...passed 00:06:28.232 Test: blockdev writev readv block ...passed 00:06:28.232 Test: blockdev writev readv size > 128k ...passed 00:06:28.232 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:28.232 Test: blockdev comparev and writev ...passed 00:06:28.232 Test: blockdev nvme passthru rw ...[2024-11-20 16:33:13.085771] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:28.232 separate metadata which is not supported yet. 00:06:28.232 passed 00:06:28.232 Test: blockdev nvme passthru vendor specific ...passed 00:06:28.232 Test: blockdev nvme admin passthru ...[2024-11-20 16:33:13.086398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:28.232 [2024-11-20 16:33:13.086442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:28.232 passed 00:06:28.232 Test: blockdev copy ...passed 00:06:28.232 00:06:28.233 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.233 suites 6 6 n/a 0 0 00:06:28.233 tests 138 138 138 0 0 00:06:28.233 asserts 893 893 893 0 n/a 00:06:28.233 00:06:28.233 Elapsed time = 1.007 seconds 00:06:28.233 0 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59908 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59908 ']' 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59908 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59908 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59908' 00:06:28.493 killing process with pid 59908 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59908 00:06:28.493 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59908 00:06:29.060 16:33:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:29.060 00:06:29.060 real 0m2.075s 00:06:29.060 user 0m5.297s 00:06:29.060 sys 0m0.275s 00:06:29.060 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.060 16:33:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:29.060 ************************************ 00:06:29.060 END 
TEST bdev_bounds 00:06:29.060 ************************************ 00:06:29.060 16:33:13 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:29.060 16:33:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:29.060 16:33:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.060 16:33:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.060 ************************************ 00:06:29.060 START TEST bdev_nbd 00:06:29.060 ************************************ 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59968 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59968 /var/tmp/spdk-nbd.sock 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59968 ']' 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.060 
16:33:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.060 16:33:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:29.060 [2024-11-20 16:33:13.893804] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:29.060 [2024-11-20 16:33:13.894312] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.319 [2024-11-20 16:33:14.051422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.319 [2024-11-20 16:33:14.153634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.886 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.145 16:33:14 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.145 1+0 records in 00:06:30.145 1+0 records out 00:06:30.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048552 s, 8.4 MB/s 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:30.145 16:33:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.404 1+0 records in 00:06:30.404 1+0 records out 00:06:30.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406369 s, 10.1 MB/s 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.404 16:33:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:30.404 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.663 1+0 records in 00:06:30.663 1+0 records out 00:06:30.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384332 s, 10.7 MB/s 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:30.663 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.921 1+0 records in 00:06:30.921 1+0 records out 00:06:30.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378703 s, 10.8 MB/s 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:30.921 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.180 1+0 records in 00:06:31.180 1+0 records out 00:06:31.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656587 s, 6.2 MB/s 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:31.180 16:33:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.438 1+0 records in 00:06:31.438 1+0 records out 00:06:31.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044149 s, 9.3 MB/s 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:31.438 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd0", 00:06:31.696 "bdev_name": "Nvme0n1" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd1", 00:06:31.696 "bdev_name": "Nvme1n1" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd2", 00:06:31.696 "bdev_name": "Nvme2n1" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd3", 00:06:31.696 "bdev_name": "Nvme2n2" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd4", 00:06:31.696 "bdev_name": "Nvme2n3" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd5", 00:06:31.696 "bdev_name": "Nvme3n1" 00:06:31.696 } 00:06:31.696 ]' 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd0", 00:06:31.696 "bdev_name": "Nvme0n1" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd1", 00:06:31.696 "bdev_name": "Nvme1n1" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": 
"/dev/nbd2", 00:06:31.696 "bdev_name": "Nvme2n1" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd3", 00:06:31.696 "bdev_name": "Nvme2n2" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd4", 00:06:31.696 "bdev_name": "Nvme2n3" 00:06:31.696 }, 00:06:31.696 { 00:06:31.696 "nbd_device": "/dev/nbd5", 00:06:31.696 "bdev_name": "Nvme3n1" 00:06:31.696 } 00:06:31.696 ]' 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.696 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.697 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.954 16:33:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.213 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.471 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.728 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.987 16:33:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.987 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:33.245 16:33:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:33.502 /dev/nbd0 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.502 1+0 records in 00:06:33.502 1+0 records out 00:06:33.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405133 s, 10.1 MB/s 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:33.502 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:33.503 /dev/nbd1 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.761 1+0 records in 00:06:33.761 1+0 records out 00:06:33.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000369103 s, 11.1 MB/s 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:33.761 /dev/nbd10 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.761 1+0 records in 00:06:33.761 1+0 records out 00:06:33.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399842 s, 10.2 MB/s 00:06:33.761 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:34.020 /dev/nbd11 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # local i 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.020 1+0 records in 00:06:34.020 1+0 records out 00:06:34.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376204 s, 10.9 MB/s 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:34.020 16:33:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:34.278 /dev/nbd12 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.278 1+0 records in 00:06:34.278 1+0 records out 00:06:34.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043973 s, 9.3 MB/s 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.278 
16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:34.278 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:34.536 /dev/nbd13 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.536 1+0 records in 00:06:34.536 1+0 records out 00:06:34.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412242 s, 9.9 MB/s 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.536 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd0", 00:06:34.794 "bdev_name": "Nvme0n1" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd1", 00:06:34.794 "bdev_name": "Nvme1n1" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd10", 00:06:34.794 "bdev_name": "Nvme2n1" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd11", 00:06:34.794 "bdev_name": "Nvme2n2" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd12", 00:06:34.794 
"bdev_name": "Nvme2n3" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd13", 00:06:34.794 "bdev_name": "Nvme3n1" 00:06:34.794 } 00:06:34.794 ]' 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd0", 00:06:34.794 "bdev_name": "Nvme0n1" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd1", 00:06:34.794 "bdev_name": "Nvme1n1" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd10", 00:06:34.794 "bdev_name": "Nvme2n1" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd11", 00:06:34.794 "bdev_name": "Nvme2n2" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd12", 00:06:34.794 "bdev_name": "Nvme2n3" 00:06:34.794 }, 00:06:34.794 { 00:06:34.794 "nbd_device": "/dev/nbd13", 00:06:34.794 "bdev_name": "Nvme3n1" 00:06:34.794 } 00:06:34.794 ]' 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.794 /dev/nbd1 00:06:34.794 /dev/nbd10 00:06:34.794 /dev/nbd11 00:06:34.794 /dev/nbd12 00:06:34.794 /dev/nbd13' 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.794 /dev/nbd1 00:06:34.794 /dev/nbd10 00:06:34.794 /dev/nbd11 00:06:34.794 /dev/nbd12 00:06:34.794 /dev/nbd13' 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:34.794 256+0 records in 00:06:34.794 256+0 records out 00:06:34.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00899622 s, 117 MB/s 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.794 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.064 256+0 records in 00:06:35.064 256+0 records out 00:06:35.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0631648 s, 16.6 MB/s 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.064 256+0 records in 00:06:35.064 256+0 records out 00:06:35.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642057 s, 16.3 MB/s 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:35.064 256+0 records in 00:06:35.064 256+0 records out 00:06:35.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0645876 s, 16.2 MB/s 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:35.064 256+0 records in 00:06:35.064 256+0 records out 00:06:35.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644741 s, 16.3 MB/s 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.064 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:35.322 256+0 records in 00:06:35.322 256+0 records out 00:06:35.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0625923 s, 16.8 MB/s 00:06:35.323 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:35.323 256+0 records in 00:06:35.323 256+0 records out 00:06:35.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0721896 s, 14.5 MB/s 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.323 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.581 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.839 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.096 16:33:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.354 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:36.612 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:36.612 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.613 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:36.871 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:36.871 malloc_lvol_verify 00:06:37.129 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:37.129 d4793716-abe6-493d-bdd5-358e41e53167 00:06:37.129 16:33:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:37.387 574fb4f2-d9fa-478f-8827-04ae72548f97 00:06:37.387 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:37.645 /dev/nbd0 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 
-- # mkfs.ext4 /dev/nbd0 00:06:37.645 mke2fs 1.47.0 (5-Feb-2023) 00:06:37.645 Discarding device blocks: 0/4096 done 00:06:37.645 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:37.645 00:06:37.645 Allocating group tables: 0/1 done 00:06:37.645 Writing inode tables: 0/1 done 00:06:37.645 Creating journal (1024 blocks): done 00:06:37.645 Writing superblocks and filesystem accounting information: 0/1 done 00:06:37.645 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.645 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.902 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.902 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.902 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59968 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59968 ']' 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59968 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59968 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.903 killing process with pid 59968 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59968' 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59968 00:06:37.903 16:33:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59968 00:06:38.865 16:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:38.865 00:06:38.865 real 0m9.581s 00:06:38.865 user 0m13.840s 00:06:38.865 sys 0m2.988s 00:06:38.865 16:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.865 16:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:38.865 ************************************ 
00:06:38.865 END TEST bdev_nbd 00:06:38.865 ************************************ 00:06:38.865 16:33:23 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:38.865 16:33:23 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:38.865 skipping fio tests on NVMe due to multi-ns failures. 00:06:38.865 16:33:23 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:38.865 16:33:23 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:38.865 16:33:23 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:38.865 16:33:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:38.865 16:33:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.865 16:33:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.865 ************************************ 00:06:38.865 START TEST bdev_verify 00:06:38.865 ************************************ 00:06:38.865 16:33:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:38.865 [2024-11-20 16:33:23.516921] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:38.865 [2024-11-20 16:33:23.517036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60336 ] 00:06:38.865 [2024-11-20 16:33:23.676414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.122 [2024-11-20 16:33:23.781215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.122 [2024-11-20 16:33:23.781324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.689 Running I/O for 5 seconds... 
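The bdev_verify stage launched above drives SPDK's bdevperf example application against the NVMe bdevs defined in bdev.json, using the verify workload so each completed read is checked against the data previously written. As a minimal sketch of an equivalent manual invocation (the binary path, JSON file and flag values are simply the ones visible in the trace above, run from an SPDK checkout):

    # 4 KiB verify I/O, queue depth 128, 5 second run, core mask 0x3 (two reactors)
    ./build/examples/bdevperf \
        --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-second IOPS lines that follow are bdevperf's periodic progress reports; the table after them summarizes per-bdev latency once the run completes.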
00:06:41.995 23488.00 IOPS, 91.75 MiB/s [2024-11-20T16:33:27.814Z] 23552.00 IOPS, 92.00 MiB/s [2024-11-20T16:33:28.747Z] 20860.67 IOPS, 81.49 MiB/s [2024-11-20T16:33:29.749Z] 22000.75 IOPS, 85.94 MiB/s [2024-11-20T16:33:29.749Z] 22249.20 IOPS, 86.91 MiB/s 00:06:44.863 Latency(us) 00:06:44.863 [2024-11-20T16:33:29.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.863 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x0 length 0xbd0bd 00:06:44.863 Nvme0n1 : 5.05 1826.65 7.14 0.00 0.00 69903.30 13409.67 383940.14 00:06:44.863 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:44.863 Nvme0n1 : 5.05 1851.81 7.23 0.00 0.00 68933.84 13107.20 390392.91 00:06:44.863 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x0 length 0xa0000 00:06:44.863 Nvme1n1 : 5.05 1815.27 7.09 0.00 0.00 70151.94 15829.46 408138.04 00:06:44.863 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0xa0000 length 0xa0000 00:06:44.863 Nvme1n1 : 5.05 1849.82 7.23 0.00 0.00 68788.34 15022.87 387166.52 00:06:44.863 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x0 length 0x80000 00:06:44.863 Nvme2n1 : 5.05 1815.17 7.09 0.00 0.00 70061.24 8015.56 408138.04 00:06:44.863 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x80000 length 0x80000 00:06:44.863 Nvme2n1 : 5.05 1849.30 7.22 0.00 0.00 68656.41 16333.59 382326.94 00:06:44.863 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x0 length 0x80000 00:06:44.863 Nvme2n2 : 5.06 1822.88 7.12 0.00 0.00 69711.88 3730.51 404911.66 00:06:44.863 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x80000 length 0x80000 00:06:44.863 Nvme2n2 : 5.07 1856.90 7.25 0.00 0.00 68274.41 4486.70 377487.36 00:06:44.863 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x0 length 0x80000 00:06:44.863 Nvme2n3 : 5.06 1821.87 7.12 0.00 0.00 69614.67 4864.79 401685.27 00:06:44.863 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x80000 length 0x80000 00:06:44.863 Nvme2n3 : 5.08 1865.99 7.29 0.00 0.00 67888.63 7259.37 375874.17 00:06:44.863 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x0 length 0x20000 00:06:44.863 Nvme3n1 : 5.07 1829.93 7.15 0.00 0.00 69214.76 8116.38 383940.14 00:06:44.863 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:44.863 Verification LBA range: start 0x20000 length 0x20000 00:06:44.863 Nvme3n1 : 5.08 1865.50 7.29 0.00 0.00 67814.01 7612.26 372647.78 00:06:44.863 [2024-11-20T16:33:29.749Z] =================================================================================================================== 00:06:44.863 [2024-11-20T16:33:29.749Z] Total : 22071.08 86.22 0.00 0.00 69075.52 3730.51 408138.04 00:06:45.796 00:06:45.796 real 0m7.216s 00:06:45.796 user 0m13.537s 00:06:45.796 sys 0m0.207s 00:06:45.796 16:33:30 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.796 16:33:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:45.796 ************************************ 00:06:45.796 END TEST bdev_verify 00:06:45.796 ************************************ 00:06:46.053 16:33:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:46.053 16:33:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:46.053 16:33:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.053 16:33:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.053 ************************************ 00:06:46.053 START TEST bdev_verify_big_io 00:06:46.053 ************************************ 00:06:46.053 16:33:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:46.053 [2024-11-20 16:33:30.781355] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:46.053 [2024-11-20 16:33:30.781503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60434 ] 00:06:46.311 [2024-11-20 16:33:30.941880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.311 [2024-11-20 16:33:31.043250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.311 [2024-11-20 16:33:31.043444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.878 Running I/O for 5 seconds... 
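The big-I/O variant above repeats the same verify run with -o 65536, i.e. 64 KiB per I/O instead of 4 KiB, so the MiB/s column rather than raw IOPS becomes the interesting number. The two columns are related by MiB/s = IOPS * io_size / 2^20; for example, the 23488 IOPS reported in the 4 KiB run above works out to 23488 * 4096 / 1048576 = 91.75 MiB/s, matching the log. The same one-line check can be applied to any of the progress lines below:

    # recompute the MiB/s column from an IOPS figure and the I/O size in bytes
    iops=23488; io_size=4096
    awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'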
00:06:49.764 0.00 IOPS, 0.00 MiB/s [2024-11-20T16:33:36.551Z] 862.00 IOPS, 53.88 MiB/s [2024-11-20T16:33:37.925Z] 1275.33 IOPS, 79.71 MiB/s [2024-11-20T16:33:37.925Z] 1518.50 IOPS, 94.91 MiB/s 00:06:53.039 Latency(us) 00:06:53.039 [2024-11-20T16:33:37.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.039 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x0 length 0xbd0b 00:06:53.039 Nvme0n1 : 5.69 113.22 7.08 0.00 0.00 1040980.42 15829.46 1264743.98 00:06:53.039 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:53.039 Nvme0n1 : 5.79 110.62 6.91 0.00 0.00 1115511.65 24702.03 1245385.65 00:06:53.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x0 length 0xa000 00:06:53.039 Nvme1n1 : 5.98 123.55 7.72 0.00 0.00 951280.93 78239.90 1045349.61 00:06:53.039 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0xa000 length 0xa000 00:06:53.039 Nvme1n1 : 5.79 110.58 6.91 0.00 0.00 1075431.27 116956.55 1064707.94 00:06:53.039 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x0 length 0x8000 00:06:53.039 Nvme2n1 : 5.98 124.32 7.77 0.00 0.00 912368.45 78643.20 1000180.18 00:06:53.039 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x8000 length 0x8000 00:06:53.039 Nvme2n1 : 5.87 112.71 7.04 0.00 0.00 1015447.65 83482.78 1058255.16 00:06:53.039 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x0 length 0x8000 00:06:53.039 Nvme2n2 : 5.99 128.21 8.01 0.00 0.00 862420.68 113730.17 1032444.06 00:06:53.039 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x8000 length 0x8000 00:06:53.039 Nvme2n2 : 5.94 118.42 7.40 0.00 0.00 938070.50 68157.44 1084066.26 00:06:53.039 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x0 length 0x8000 00:06:53.039 Nvme2n3 : 6.07 137.02 8.56 0.00 0.00 783685.68 22282.24 1051802.39 00:06:53.039 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x8000 length 0x8000 00:06:53.039 Nvme2n3 : 6.05 126.93 7.93 0.00 0.00 846194.48 60494.77 1103424.59 00:06:53.039 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x0 length 0x2000 00:06:53.039 Nvme3n1 : 6.08 147.30 9.21 0.00 0.00 702865.68 1506.07 1084066.26 00:06:53.039 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.039 Verification LBA range: start 0x2000 length 0x2000 00:06:53.039 Nvme3n1 : 6.09 143.09 8.94 0.00 0.00 725755.84 2709.66 1129235.69 00:06:53.039 [2024-11-20T16:33:37.925Z] =================================================================================================================== 00:06:53.039 [2024-11-20T16:33:37.925Z] Total : 1495.98 93.50 0.00 0.00 899547.09 1506.07 1264743.98 00:06:54.414 00:06:54.414 real 0m8.563s 00:06:54.414 user 0m16.191s 00:06:54.414 sys 0m0.223s 00:06:54.414 16:33:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:54.414 16:33:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:54.414 ************************************ 00:06:54.414 END TEST bdev_verify_big_io 00:06:54.414 ************************************ 00:06:54.672 16:33:39 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.672 16:33:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:54.672 16:33:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.672 16:33:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.672 ************************************ 00:06:54.672 START TEST bdev_write_zeroes 00:06:54.672 ************************************ 00:06:54.672 16:33:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.672 [2024-11-20 16:33:39.380498] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:54.672 [2024-11-20 16:33:39.380625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60543 ] 00:06:54.672 [2024-11-20 16:33:39.545684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.929 [2024-11-20 16:33:39.647409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.494 Running I/O for 1 seconds... 
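Every stage in this log, bdev_write_zeroes included, is wrapped the same way: an opening banner, the command executed under xtrace, the wall/user/sys times, and a closing banner. The real wrapper is run_test from SPDK's autotest_common.sh and does considerably more (xtrace toggling, timing, failure bookkeeping); purely as an illustration of the banner pattern seen throughout this output, and not the actual implementation, a reduced version could look like:

    # illustrative sketch only; the real run_test lives in SPDK's autotest_common.sh
    run_stage() {
        local name=$1; shift
        printf '%s\n' '************************************' "START TEST $name" '************************************'
        "$@"
        local rc=$?
        printf '%s\n' '************************************' "END TEST $name" '************************************'
        return "$rc"
    }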
00:06:56.426 67200.00 IOPS, 262.50 MiB/s 00:06:56.426 Latency(us) 00:06:56.426 [2024-11-20T16:33:41.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.426 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.426 Nvme0n1 : 1.02 11153.84 43.57 0.00 0.00 11452.24 8872.57 21072.34 00:06:56.426 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.426 Nvme1n1 : 1.02 11138.77 43.51 0.00 0.00 11452.75 8973.39 21576.47 00:06:56.426 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.426 Nvme2n1 : 1.02 11125.98 43.46 0.00 0.00 11422.39 9175.04 19257.50 00:06:56.427 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.427 Nvme2n2 : 1.03 11113.02 43.41 0.00 0.00 11410.21 9074.22 19156.68 00:06:56.427 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.427 Nvme2n3 : 1.03 11100.19 43.36 0.00 0.00 11401.98 9023.80 18955.03 00:06:56.427 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.427 Nvme3n1 : 1.03 11087.37 43.31 0.00 0.00 11379.65 8015.56 20669.05 00:06:56.427 [2024-11-20T16:33:41.313Z] =================================================================================================================== 00:06:56.427 [2024-11-20T16:33:41.313Z] Total : 66719.17 260.62 0.00 0.00 11419.87 8015.56 21576.47 00:06:57.359 00:06:57.359 real 0m2.684s 00:06:57.359 user 0m2.391s 00:06:57.359 sys 0m0.174s 00:06:57.360 16:33:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.360 16:33:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 ************************************ 00:06:57.360 END TEST bdev_write_zeroes 00:06:57.360 ************************************ 00:06:57.360 16:33:42 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.360 16:33:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:57.360 16:33:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.360 16:33:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 ************************************ 00:06:57.360 START TEST bdev_json_nonenclosed 00:06:57.360 ************************************ 00:06:57.360 16:33:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.360 [2024-11-20 16:33:42.114018] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:06:57.360 [2024-11-20 16:33:42.114125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:06:57.618 [2024-11-20 16:33:42.275525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.618 [2024-11-20 16:33:42.383132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.618 [2024-11-20 16:33:42.383216] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:57.618 [2024-11-20 16:33:42.383233] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:57.618 [2024-11-20 16:33:42.383242] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.876 00:06:57.876 real 0m0.513s 00:06:57.876 user 0m0.312s 00:06:57.876 sys 0m0.096s 00:06:57.876 16:33:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.876 16:33:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 ************************************ 00:06:57.876 END TEST bdev_json_nonenclosed 00:06:57.876 ************************************ 00:06:57.876 16:33:42 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.876 16:33:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:57.876 16:33:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.876 16:33:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 ************************************ 00:06:57.876 START TEST bdev_json_nonarray 00:06:57.876 ************************************ 00:06:57.876 16:33:42 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.876 [2024-11-20 16:33:42.666682] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:57.876 [2024-11-20 16:33:42.666790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60616 ] 00:06:58.135 [2024-11-20 16:33:42.823582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.135 [2024-11-20 16:33:42.921909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.135 [2024-11-20 16:33:42.921994] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
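The two JSON stages here are negative tests: bdev_json_nonenclosed and bdev_json_nonarray hand bdevperf deliberately malformed configuration files and expect the clean json_config errors seen in this output, followed by a "spdk_app_stop'd on non-zero" shutdown rather than a crash. The contents of nonenclosed.json and nonarray.json are not shown in the log; as a hypothetical illustration of the shapes involved, a well-formed SPDK-style config is a single JSON object whose subsystems member is an array:

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }

The nonenclosed case would drop the outer braces (a bare "subsystems": [...] fragment, triggering the "not enclosed in {}" error above), while the nonarray case would make subsystems something other than an array (for example an object), triggering the "'subsystems' should be an array" error.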
00:06:58.135 [2024-11-20 16:33:42.922010] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:58.135 [2024-11-20 16:33:42.922020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.392 00:06:58.392 real 0m0.494s 00:06:58.392 user 0m0.298s 00:06:58.392 sys 0m0.093s 00:06:58.392 16:33:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.392 16:33:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 END TEST bdev_json_nonarray 00:06:58.393 ************************************ 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:58.393 16:33:43 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:58.393 00:06:58.393 real 0m36.093s 00:06:58.393 user 0m56.322s 00:06:58.393 sys 0m4.942s 00:06:58.393 16:33:43 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.393 16:33:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 END TEST blockdev_nvme 00:06:58.393 ************************************ 00:06:58.393 16:33:43 -- spdk/autotest.sh@209 -- # uname -s 00:06:58.393 16:33:43 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:58.393 16:33:43 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:58.393 16:33:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.393 16:33:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.393 16:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 START TEST blockdev_nvme_gpt 00:06:58.393 ************************************ 00:06:58.393 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:58.393 * Looking for test storage... 
00:06:58.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:58.393 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.393 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.393 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.651 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.651 16:33:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:58.651 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.651 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.651 --rc genhtml_branch_coverage=1 00:06:58.651 --rc genhtml_function_coverage=1 00:06:58.651 --rc genhtml_legend=1 00:06:58.651 --rc geninfo_all_blocks=1 00:06:58.651 --rc geninfo_unexecuted_blocks=1 00:06:58.651 00:06:58.651 ' 00:06:58.651 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.651 --rc 
genhtml_branch_coverage=1 00:06:58.651 --rc genhtml_function_coverage=1 00:06:58.651 --rc genhtml_legend=1 00:06:58.651 --rc geninfo_all_blocks=1 00:06:58.651 --rc geninfo_unexecuted_blocks=1 00:06:58.651 00:06:58.651 ' 00:06:58.651 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.651 --rc genhtml_branch_coverage=1 00:06:58.651 --rc genhtml_function_coverage=1 00:06:58.651 --rc genhtml_legend=1 00:06:58.651 --rc geninfo_all_blocks=1 00:06:58.651 --rc geninfo_unexecuted_blocks=1 00:06:58.651 00:06:58.651 ' 00:06:58.651 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.651 --rc genhtml_branch_coverage=1 00:06:58.651 --rc genhtml_function_coverage=1 00:06:58.651 --rc genhtml_legend=1 00:06:58.651 --rc geninfo_all_blocks=1 00:06:58.651 --rc geninfo_unexecuted_blocks=1 00:06:58.651 00:06:58.651 ' 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:58.651 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60702 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60702 
00:06:58.652 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60702 ']' 00:06:58.652 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.652 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.652 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.652 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.652 16:33:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.652 16:33:43 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:58.652 [2024-11-20 16:33:43.425321] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:58.652 [2024-11-20 16:33:43.425449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60702 ] 00:06:58.997 [2024-11-20 16:33:43.594725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.997 [2024-11-20 16:33:43.693932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.577 16:33:44 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.577 16:33:44 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:59.577 16:33:44 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:59.577 16:33:44 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:59.577 16:33:44 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:59.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.835 Waiting for block devices as requested 00:07:00.094 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.094 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.094 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.094 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:05.358 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:05.358 16:33:49 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:05.358 16:33:49 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:05.358 16:33:49 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:05.358 16:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:05.358 16:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:05.358 16:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:05.358 16:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:05.358 16:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:05.358 16:33:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:05.358 16:33:50 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:05.358 BYT; 00:07:05.358 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:05.358 BYT; 00:07:05.358 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:05.358 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:05.359 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:05.359 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:05.359 16:33:50 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:05.359 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:05.359 16:33:50 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:06.290 The operation has completed successfully. 00:07:06.290 16:33:51 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:07.664 The operation has completed successfully. 00:07:07.664 16:33:52 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:07.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:08.229 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:08.229 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:08.229 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:08.229 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:08.229 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:08.229 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.229 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.229 [] 00:07:08.229 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.229 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:08.229 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:08.229 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:08.229 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:08.486 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:08.486 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.486 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:08.745 16:33:53 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.745 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:08.745 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:08.746 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "604fc72c-12ed-44da-9480-7295c96bee1e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "604fc72c-12ed-44da-9480-7295c96bee1e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a7264125-1d7c-4561-9153-9554bfec4e86"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a7264125-1d7c-4561-9153-9554bfec4e86",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "32eea400-6ab3-424c-8e4b-760d8635a3d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "32eea400-6ab3-424c-8e4b-760d8635a3d4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "985e6cba-d845-487a-94bf-c046197b4e5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "985e6cba-d845-487a-94bf-c046197b4e5f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "513ca781-4d91-40e7-a3dd-bf4f23633b85"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "513ca781-4d91-40e7-a3dd-bf4f23633b85",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:08.746 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:08.746 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:08.746 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:08.746 16:33:53 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60702 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60702 ']' 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60702 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60702 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.746 killing process with pid 60702 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60702' 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60702 00:07:08.746 16:33:53 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60702 00:07:10.117 16:33:54 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:10.117 16:33:54 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.117 16:33:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:10.117 16:33:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.117 16:33:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.117 ************************************ 00:07:10.117 START TEST bdev_hello_world 00:07:10.117 ************************************ 00:07:10.117 16:33:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.117 
[2024-11-20 16:33:54.856450] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:10.117 [2024-11-20 16:33:54.856567] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61321 ] 00:07:10.374 [2024-11-20 16:33:55.016285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.374 [2024-11-20 16:33:55.115623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.972 [2024-11-20 16:33:55.654398] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:10.972 [2024-11-20 16:33:55.654453] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:10.972 [2024-11-20 16:33:55.654476] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:10.972 [2024-11-20 16:33:55.656918] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:10.972 [2024-11-20 16:33:55.817067] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:10.972 [2024-11-20 16:33:55.817148] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:10.972 [2024-11-20 16:33:55.817395] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:10.972 00:07:10.972 [2024-11-20 16:33:55.817429] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:12.346 00:07:12.346 real 0m2.167s 00:07:12.346 user 0m1.829s 00:07:12.346 sys 0m0.229s 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:12.346 ************************************ 00:07:12.346 END TEST bdev_hello_world 00:07:12.346 ************************************ 00:07:12.346 16:33:56 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:12.346 16:33:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.346 16:33:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.346 16:33:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.346 ************************************ 00:07:12.346 START TEST bdev_bounds 00:07:12.346 ************************************ 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61363 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.346 Process bdevio pid: 61363 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61363' 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61363 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61363 ']' 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.346 16:33:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:12.346 [2024-11-20 16:33:57.060445] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:12.346 [2024-11-20 16:33:57.060562] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61363 ] 00:07:12.346 [2024-11-20 16:33:57.215281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.604 [2024-11-20 16:33:57.318999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.604 [2024-11-20 16:33:57.319301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.604 [2024-11-20 16:33:57.319322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.172 16:33:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.172 16:33:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:13.172 16:33:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:13.431 I/O targets: 00:07:13.431 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:13.431 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:13.431 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:13.431 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.431 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.431 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.431 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:13.431 00:07:13.431 00:07:13.431 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.431 http://cunit.sourceforge.net/ 00:07:13.431 00:07:13.431 00:07:13.431 Suite: bdevio tests on: Nvme3n1 00:07:13.431 Test: blockdev write read block ...passed 00:07:13.431 Test: blockdev write zeroes read block ...passed 00:07:13.431 Test: blockdev write zeroes read no split ...passed 00:07:13.431 Test: blockdev write zeroes read split ...passed 00:07:13.431 Test: blockdev write zeroes read split partial ...passed 00:07:13.431 Test: blockdev reset ...[2024-11-20 16:33:58.127604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:13.431 [2024-11-20 16:33:58.130423] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:13.431 passed 00:07:13.431 Test: blockdev write read 8 blocks ...passed 00:07:13.431 Test: blockdev write read size > 128k ...passed 00:07:13.431 Test: blockdev write read invalid size ...passed 00:07:13.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.431 Test: blockdev write read max offset ...passed 00:07:13.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.431 Test: blockdev writev readv 8 blocks ...passed 00:07:13.431 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.431 Test: blockdev writev readv block ...passed 00:07:13.431 Test: blockdev writev readv size > 128k ...passed 00:07:13.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.431 Test: blockdev comparev and writev ...[2024-11-20 16:33:58.136187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1a04000 len:0x1000 00:07:13.431 [2024-11-20 16:33:58.136301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.431 passed 00:07:13.431 Test: blockdev nvme passthru rw ...passed 00:07:13.431 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:33:58.137237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.431 [2024-11-20 16:33:58.137336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:13.431 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:13.431 passed 00:07:13.431 Test: blockdev copy ...passed 00:07:13.431 Suite: bdevio tests on: Nvme2n3 00:07:13.431 Test: blockdev write read block ...passed 00:07:13.431 Test: blockdev write zeroes read block ...passed 00:07:13.431 Test: blockdev write zeroes read no split ...passed 00:07:13.431 Test: blockdev write zeroes read split ...passed 00:07:13.431 Test: blockdev write zeroes read split partial ...passed 00:07:13.431 Test: blockdev reset ...[2024-11-20 16:33:58.180038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.431 [2024-11-20 16:33:58.183049] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:13.431 passed 00:07:13.431 Test: blockdev write read 8 blocks ...passed 00:07:13.431 Test: blockdev write read size > 128k ...passed 00:07:13.431 Test: blockdev write read invalid size ...passed 00:07:13.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.431 Test: blockdev write read max offset ...passed 00:07:13.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.431 Test: blockdev writev readv 8 blocks ...passed 00:07:13.431 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.431 Test: blockdev writev readv block ...passed 00:07:13.431 Test: blockdev writev readv size > 128k ...passed 00:07:13.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.431 Test: blockdev comparev and writev ...[2024-11-20 16:33:58.188678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1a02000 len:0x1000 00:07:13.431 [2024-11-20 16:33:58.188721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.431 passed 00:07:13.431 Test: blockdev nvme passthru rw ...passed 00:07:13.431 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:33:58.189205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.431 passed 00:07:13.431 Test: blockdev nvme admin passthru ...[2024-11-20 16:33:58.189236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.431 passed 00:07:13.432 Test: blockdev copy ...passed 00:07:13.432 Suite: bdevio tests on: Nvme2n2 00:07:13.432 Test: blockdev write read block ...passed 00:07:13.432 Test: blockdev write zeroes read block ...passed 00:07:13.432 Test: blockdev write zeroes read no split ...passed 00:07:13.432 Test: blockdev write zeroes read split ...passed 00:07:13.432 Test: blockdev write zeroes read split partial ...passed 00:07:13.432 Test: blockdev reset ...[2024-11-20 16:33:58.233145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.432 [2024-11-20 16:33:58.236137] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:13.432 passed 00:07:13.432 Test: blockdev write read 8 blocks ...passed 00:07:13.432 Test: blockdev write read size > 128k ...passed 00:07:13.432 Test: blockdev write read invalid size ...passed 00:07:13.432 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.432 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.432 Test: blockdev write read max offset ...passed 00:07:13.432 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.432 Test: blockdev writev readv 8 blocks ...passed 00:07:13.432 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.432 Test: blockdev writev readv block ...passed 00:07:13.432 Test: blockdev writev readv size > 128k ...passed 00:07:13.432 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.432 Test: blockdev comparev and writev ...[2024-11-20 16:33:58.242126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5e38000 len:0x1000 00:07:13.432 passed 00:07:13.432 Test: blockdev nvme passthru rw ...[2024-11-20 16:33:58.242166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.432 passed 00:07:13.432 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:33:58.242699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.432 passed 00:07:13.432 Test: blockdev nvme admin passthru ...[2024-11-20 16:33:58.242726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.432 passed 00:07:13.432 Test: blockdev copy ...passed 00:07:13.432 Suite: bdevio tests on: Nvme2n1 00:07:13.432 Test: blockdev write read block ...passed 00:07:13.432 Test: blockdev write zeroes read block ...passed 00:07:13.432 Test: blockdev write zeroes read no split ...passed 00:07:13.432 Test: blockdev write zeroes read split ...passed 00:07:13.432 Test: blockdev write zeroes read split partial ...passed 00:07:13.432 Test: blockdev reset ...[2024-11-20 16:33:58.285150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.432 [2024-11-20 16:33:58.288148] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:13.432 passed 00:07:13.432 Test: blockdev write read 8 blocks ...passed 00:07:13.432 Test: blockdev write read size > 128k ...passed 00:07:13.432 Test: blockdev write read invalid size ...passed 00:07:13.432 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.432 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.432 Test: blockdev write read max offset ...passed 00:07:13.432 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.432 Test: blockdev writev readv 8 blocks ...passed 00:07:13.432 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.432 Test: blockdev writev readv block ...passed 00:07:13.432 Test: blockdev writev readv size > 128k ...passed 00:07:13.432 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.432 Test: blockdev comparev and writev ...[2024-11-20 16:33:58.293830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5e34000 len:0x1000 00:07:13.432 [2024-11-20 16:33:58.293870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.432 passed 00:07:13.432 Test: blockdev nvme passthru rw ...passed 00:07:13.432 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.432 Test: blockdev nvme admin passthru ...[2024-11-20 16:33:58.294534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.432 [2024-11-20 16:33:58.294560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.432 passed 00:07:13.432 Test: blockdev copy ...passed 00:07:13.432 Suite: bdevio tests on: Nvme1n1p2 00:07:13.432 Test: blockdev write read block ...passed 00:07:13.432 Test: blockdev write zeroes read block ...passed 00:07:13.432 Test: blockdev write zeroes read no split ...passed 00:07:13.690 Test: blockdev write zeroes read split ...passed 00:07:13.690 Test: blockdev write zeroes read split partial ...passed 00:07:13.690 Test: blockdev reset ...[2024-11-20 16:33:58.338715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:13.690 [2024-11-20 16:33:58.341242] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:13.690 passed 00:07:13.690 Test: blockdev write read 8 blocks ...passed 00:07:13.690 Test: blockdev write read size > 128k ...passed 00:07:13.690 Test: blockdev write read invalid size ...passed 00:07:13.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.691 Test: blockdev write read max offset ...passed 00:07:13.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.691 Test: blockdev writev readv 8 blocks ...passed 00:07:13.691 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.691 Test: blockdev writev readv block ...passed 00:07:13.691 Test: blockdev writev readv size > 128k ...passed 00:07:13.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.691 Test: blockdev comparev and writev ...[2024-11-20 16:33:58.346602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2b5e30000 len:0x1000 00:07:13.691 [2024-11-20 16:33:58.346636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.691 passed 00:07:13.691 Test: blockdev nvme passthru rw ...passed 00:07:13.691 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.691 Test: blockdev nvme admin passthru ...passed 00:07:13.691 Test: blockdev copy ...passed 00:07:13.691 Suite: bdevio tests on: Nvme1n1p1 00:07:13.691 Test: blockdev write read block ...passed 00:07:13.691 Test: blockdev write zeroes read block ...passed 00:07:13.691 Test: blockdev write zeroes read no split ...passed 00:07:13.691 Test: blockdev write zeroes read split ...passed 00:07:13.691 Test: blockdev write zeroes read split partial ...passed 00:07:13.691 Test: blockdev reset ...[2024-11-20 16:33:58.388339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:13.691 [2024-11-20 16:33:58.392151] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:13.691 passed 00:07:13.691 Test: blockdev write read 8 blocks ...passed 00:07:13.691 Test: blockdev write read size > 128k ...passed 00:07:13.691 Test: blockdev write read invalid size ...passed 00:07:13.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.691 Test: blockdev write read max offset ...passed 00:07:13.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.691 Test: blockdev writev readv 8 blocks ...passed 00:07:13.691 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.691 Test: blockdev writev readv block ...passed 00:07:13.691 Test: blockdev writev readv size > 128k ...passed 00:07:13.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.691 Test: blockdev comparev and writev ...[2024-11-20 16:33:58.397713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x299c0e000 len:0x1000 00:07:13.691 [2024-11-20 16:33:58.397752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.691 passed 00:07:13.691 Test: blockdev nvme passthru rw ...passed 00:07:13.691 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.691 Test: blockdev nvme admin passthru ...passed 00:07:13.691 Test: blockdev copy ...passed 00:07:13.691 Suite: bdevio tests on: Nvme0n1 00:07:13.691 Test: blockdev write read block ...passed 00:07:13.691 Test: blockdev write zeroes read block ...passed 00:07:13.691 Test: blockdev write zeroes read no split ...passed 00:07:13.691 Test: blockdev write zeroes read split ...passed 00:07:13.691 Test: blockdev write zeroes read split partial ...passed 00:07:13.691 Test: blockdev reset ...[2024-11-20 16:33:58.439731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:13.691 [2024-11-20 16:33:58.442694] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:13.691 passed 00:07:13.691 Test: blockdev write read 8 blocks ...passed 00:07:13.691 Test: blockdev write read size > 128k ...passed 00:07:13.691 Test: blockdev write read invalid size ...passed 00:07:13.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.691 Test: blockdev write read max offset ...passed 00:07:13.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.691 Test: blockdev writev readv 8 blocks ...passed 00:07:13.691 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.691 Test: blockdev writev readv block ...passed 00:07:13.691 Test: blockdev writev readv size > 128k ...passed 00:07:13.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.691 Test: blockdev comparev and writev ...passed 00:07:13.691 Test: blockdev nvme passthru rw ...[2024-11-20 16:33:58.447625] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:13.691 separate metadata which is not supported yet. 
00:07:13.691 passed 00:07:13.691 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:33:58.447977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:13.691 [2024-11-20 16:33:58.448014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:13.691 passed 00:07:13.691 Test: blockdev nvme admin passthru ...passed 00:07:13.691 Test: blockdev copy ...passed 00:07:13.691 00:07:13.691 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.691 suites 7 7 n/a 0 0 00:07:13.691 tests 161 161 161 0 0 00:07:13.691 asserts 1025 1025 1025 0 n/a 00:07:13.691 00:07:13.691 Elapsed time = 0.992 seconds 00:07:13.691 0 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61363 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61363 ']' 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61363 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61363 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.691 killing process with pid 61363 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61363' 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61363 00:07:13.691 16:33:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61363 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:14.624 00:07:14.624 real 0m2.146s 00:07:14.624 user 0m5.612s 00:07:14.624 sys 0m0.282s 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:14.624 ************************************ 00:07:14.624 END TEST bdev_bounds 00:07:14.624 ************************************ 00:07:14.624 16:33:59 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:14.624 16:33:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.624 16:33:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.624 16:33:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.624 ************************************ 00:07:14.624 START TEST bdev_nbd 00:07:14.624 ************************************ 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:14.624 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61417 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61417 /var/tmp/spdk-nbd.sock 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61417 ']' 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:14.625 16:33:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.625 [2024-11-20 16:33:59.251863] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
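The trace above sets up the pattern this stage repeats for all seven bdevs: bdev_svc is started with its RPC socket at /var/tmp/spdk-nbd.sock and the bdev JSON config, each bdev is exported to the kernel as a /dev/nbdX node with nbd_start_disk, sanity-checked by waitfornbd with a single 4 KiB O_DIRECT dd read, and later detached with nbd_stop_disk. A minimal manual equivalent of one such cycle is sketched below; it assumes bdev_svc is already listening on that socket, that a bdev named Nvme0n1 exists, that the commands run as root from an SPDK checkout (so scripts/rpc.py resolves), and is not the harness code itself.

#!/usr/bin/env bash
# Sketch of one export/verify/detach cycle, mirroring what nbd_common.sh drives here.
# Assumptions: bdev_svc already listens on $sock and a bdev "Nvme0n1" exists.
set -euo pipefail
sock=/var/tmp/spdk-nbd.sock
rpc=scripts/rpc.py

# Export the bdev on an explicit NBD node (omitting /dev/nbd0 lets SPDK pick one,
# as the nbd_start_disks_without_nbd_idx pass in this log does).
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0

# The dd/stat check waitfornbd performs once the device shows up in /proc/partitions:
# one 4 KiB O_DIRECT read must succeed and yield a non-empty file.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
rm -f /tmp/nbdtest

# Inspect current exports, then detach.
"$rpc" -s "$sock" nbd_get_disks
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0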
00:07:14.625 [2024-11-20 16:33:59.251985] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.625 [2024-11-20 16:33:59.413119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.882 [2024-11-20 16:33:59.515955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.447 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.706 1+0 records in 00:07:15.706 1+0 records out 00:07:15.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532086 s, 7.7 MB/s 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.706 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.964 1+0 records in 00:07:15.964 1+0 records out 00:07:15.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743937 s, 5.5 MB/s 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.964 1+0 records in 00:07:15.964 1+0 records out 00:07:15.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382993 s, 10.7 MB/s 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.964 16:34:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.223 1+0 records in 00:07:16.223 1+0 records out 00:07:16.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429142 s, 9.5 MB/s 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.223 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.481 1+0 records in 00:07:16.481 1+0 records out 00:07:16.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435312 s, 9.4 MB/s 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.481 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.741 1+0 records in 00:07:16.741 1+0 records out 00:07:16.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00144741 s, 2.8 MB/s 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.741 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.742 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.742 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.742 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.742 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.000 1+0 records in 00:07:17.000 1+0 records out 00:07:17.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133809 s, 3.1 MB/s 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.000 16:34:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.258 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:17.258 { 00:07:17.258 "nbd_device": "/dev/nbd0", 00:07:17.258 "bdev_name": "Nvme0n1" 00:07:17.258 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd1", 00:07:17.259 "bdev_name": "Nvme1n1p1" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd2", 00:07:17.259 "bdev_name": "Nvme1n1p2" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd3", 00:07:17.259 "bdev_name": "Nvme2n1" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd4", 00:07:17.259 "bdev_name": "Nvme2n2" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd5", 00:07:17.259 "bdev_name": "Nvme2n3" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd6", 00:07:17.259 "bdev_name": "Nvme3n1" 00:07:17.259 } 00:07:17.259 ]' 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd0", 00:07:17.259 "bdev_name": "Nvme0n1" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd1", 00:07:17.259 "bdev_name": "Nvme1n1p1" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd2", 00:07:17.259 "bdev_name": "Nvme1n1p2" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd3", 00:07:17.259 "bdev_name": "Nvme2n1" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd4", 00:07:17.259 "bdev_name": "Nvme2n2" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd5", 00:07:17.259 "bdev_name": "Nvme2n3" 00:07:17.259 }, 00:07:17.259 { 00:07:17.259 "nbd_device": "/dev/nbd6", 00:07:17.259 "bdev_name": "Nvme3n1" 00:07:17.259 } 00:07:17.259 ]' 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.259 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.517 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.776 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.036 16:34:02 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.294 16:34:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.551 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
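The stop path mirrors the start path: nbd_get_disks is parsed with jq -r '.[] | .nbd_device' to recover the device list, each node is detached with nbd_stop_disk, and waitfornbd_exit polls /proc/partitions (bounded at 20 attempts, like waitfornbd) until the nbdX entry disappears. A condensed sketch of that teardown loop is shown below; it assumes the same RPC socket and an SPDK checkout, and the short sleep between polls is an assumption, not something visible in this trace.

#!/usr/bin/env bash
# Sketch of the teardown loop driven by nbd_common.sh: stop every exported NBD
# device and wait for the kernel to drop it from /proc/partitions.
set -euo pipefail
sock=/var/tmp/spdk-nbd.sock
rpc=scripts/rpc.py

for dev in $("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'); do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    # Same bound the harness uses (i from 1 to 20); the sleep is assumed.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done

# Afterwards nbd_get_disks should report an empty list ("[]"), as it does below.
"$rpc" -s "$sock" nbd_get_disks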
00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.808 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.809 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.809 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.809 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.067 16:34:03 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.067 16:34:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:19.326 /dev/nbd0 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.326 1+0 records in 00:07:19.326 1+0 records out 00:07:19.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378179 s, 10.8 MB/s 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.326 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:19.583 /dev/nbd1 00:07:19.583 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.583 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.583 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:19.583 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.583 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.583 16:34:04 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.584 1+0 records in 00:07:19.584 1+0 records out 00:07:19.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336812 s, 12.2 MB/s 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.584 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:19.841 /dev/nbd10 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.841 1+0 records in 00:07:19.841 1+0 records out 00:07:19.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678523 s, 6.0 MB/s 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.841 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:20.099 /dev/nbd11 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.099 1+0 records in 00:07:20.099 1+0 records out 00:07:20.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317528 s, 12.9 MB/s 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.099 16:34:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:20.357 /dev/nbd12 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.357 1+0 records in 00:07:20.357 1+0 records out 00:07:20.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381542 s, 10.7 MB/s 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.357 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:20.615 /dev/nbd13 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.615 1+0 records in 00:07:20.615 1+0 records out 00:07:20.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380471 s, 10.8 MB/s 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:20.615 16:34:05 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.615 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:20.615 /dev/nbd14 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.873 1+0 records in 00:07:20.873 1+0 records out 00:07:20.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432296 s, 9.5 MB/s 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.873 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.874 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.874 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.874 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd0", 00:07:20.874 "bdev_name": "Nvme0n1" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd1", 00:07:20.874 "bdev_name": "Nvme1n1p1" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd10", 00:07:20.874 "bdev_name": "Nvme1n1p2" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd11", 00:07:20.874 "bdev_name": "Nvme2n1" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd12", 00:07:20.874 "bdev_name": "Nvme2n2" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd13", 
00:07:20.874 "bdev_name": "Nvme2n3" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd14", 00:07:20.874 "bdev_name": "Nvme3n1" 00:07:20.874 } 00:07:20.874 ]' 00:07:20.874 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd0", 00:07:20.874 "bdev_name": "Nvme0n1" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd1", 00:07:20.874 "bdev_name": "Nvme1n1p1" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd10", 00:07:20.874 "bdev_name": "Nvme1n1p2" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd11", 00:07:20.874 "bdev_name": "Nvme2n1" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd12", 00:07:20.874 "bdev_name": "Nvme2n2" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd13", 00:07:20.874 "bdev_name": "Nvme2n3" 00:07:20.874 }, 00:07:20.874 { 00:07:20.874 "nbd_device": "/dev/nbd14", 00:07:20.874 "bdev_name": "Nvme3n1" 00:07:20.874 } 00:07:20.874 ]' 00:07:20.874 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.132 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.132 /dev/nbd1 00:07:21.132 /dev/nbd10 00:07:21.132 /dev/nbd11 00:07:21.132 /dev/nbd12 00:07:21.132 /dev/nbd13 00:07:21.132 /dev/nbd14' 00:07:21.132 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.132 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.132 /dev/nbd1 00:07:21.132 /dev/nbd10 00:07:21.132 /dev/nbd11 00:07:21.132 /dev/nbd12 00:07:21.132 /dev/nbd13 00:07:21.132 /dev/nbd14' 00:07:21.132 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:21.132 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:21.133 256+0 records in 00:07:21.133 256+0 records out 00:07:21.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474493 s, 221 MB/s 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:21.133 256+0 records in 00:07:21.133 256+0 records out 00:07:21.133 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.110941 s, 9.5 MB/s 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.133 256+0 records in 00:07:21.133 256+0 records out 00:07:21.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0760188 s, 13.8 MB/s 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.133 16:34:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:21.390 256+0 records in 00:07:21.390 256+0 records out 00:07:21.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0737007 s, 14.2 MB/s 00:07:21.390 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.390 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:21.390 256+0 records in 00:07:21.390 256+0 records out 00:07:21.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0816809 s, 12.8 MB/s 00:07:21.390 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.390 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:21.390 256+0 records in 00:07:21.390 256+0 records out 00:07:21.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0732254 s, 14.3 MB/s 00:07:21.390 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.390 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:21.648 256+0 records in 00:07:21.648 256+0 records out 00:07:21.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0741858 s, 14.1 MB/s 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:21.648 256+0 records in 00:07:21.648 256+0 records out 00:07:21.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0744157 s, 14.1 MB/s 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.648 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.905 16:34:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.905 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.162 16:34:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.420 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.421 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.421 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.681 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.938 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.196 16:34:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.455 16:34:08 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:23.455 malloc_lvol_verify 00:07:23.455 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:23.713 a87c5920-93fd-47af-a446-e1b6d293eb23 00:07:23.713 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:23.971 26bcbb81-f781-473b-97ae-08b02a714434 00:07:23.971 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:24.228 /dev/nbd0 00:07:24.228 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:24.228 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:24.228 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:24.228 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:24.229 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:24.229 mke2fs 1.47.0 (5-Feb-2023) 00:07:24.229 Discarding device blocks: 0/4096 done 00:07:24.229 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:24.229 00:07:24.229 Allocating group tables: 0/1 done 00:07:24.229 Writing inode tables: 0/1 done 00:07:24.229 Creating journal (1024 blocks): done 00:07:24.229 Writing superblocks and filesystem accounting information: 0/1 done 00:07:24.229 00:07:24.229 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:24.229 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.229 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:24.229 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.229 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:24.229 16:34:08 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.229 16:34:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61417 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61417 ']' 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61417 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61417 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.485 killing process with pid 61417 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61417' 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61417 00:07:24.485 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61417 00:07:25.418 16:34:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:25.418 00:07:25.418 real 0m10.761s 00:07:25.418 user 0m15.389s 00:07:25.418 sys 0m3.565s 00:07:25.418 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.418 16:34:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 ************************************ 00:07:25.418 END TEST bdev_nbd 00:07:25.418 ************************************ 00:07:25.419 16:34:09 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:25.419 16:34:09 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:25.419 skipping fio tests on NVMe due to multi-ns failures. 00:07:25.419 16:34:09 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:25.419 16:34:09 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
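The bdev_nbd test above reduces to a small export/verify/teardown loop per device. A minimal sketch of one iteration, assuming an SPDK application is already serving RPCs on /var/tmp/spdk-nbd.sock and exposes a bdev named Nvme0n1; the rpc.py calls, dd arguments, and cmp check are the ones visible in the trace, while the scratch-file path and the retry loop are illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Export the bdev as a kernel NBD device and wait until the kernel sees it.
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # Write a known pattern through the NBD device, then read it back and compare.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    # Detach the NBD device again.
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0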
00:07:25.419 16:34:09 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:25.419 16:34:09 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:25.419 16:34:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:25.419 16:34:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.419 16:34:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.419 ************************************ 00:07:25.419 START TEST bdev_verify 00:07:25.419 ************************************ 00:07:25.419 16:34:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:25.419 [2024-11-20 16:34:10.041544] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:25.419 [2024-11-20 16:34:10.041671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61832 ] 00:07:25.419 [2024-11-20 16:34:10.201231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.676 [2024-11-20 16:34:10.309974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.677 [2024-11-20 16:34:10.310130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.244 Running I/O for 5 seconds... 
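For reference, the verify pass is a single bdevperf invocation; a sketch of the same command with the flags spelled out. The binary, config path, and values are exactly those printed by run_test above; the flag readings are the usual bdevperf meanings rather than anything stated in the log itself:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128   : 128 outstanding I/Os per job
    # -o 4096  : 4 KiB I/O size
    # -w verify: write a pattern, read it back, and compare
    # -t 5     : run for 5 seconds
    # -m 0x3   : reactors on cores 0 and 1 (hence the two reactor lines above)
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''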
00:07:28.564 22592.00 IOPS, 88.25 MiB/s [2024-11-20T16:34:14.384Z] 22592.00 IOPS, 88.25 MiB/s [2024-11-20T16:34:15.316Z] 22314.67 IOPS, 87.17 MiB/s [2024-11-20T16:34:16.286Z] 21323.50 IOPS, 83.29 MiB/s [2024-11-20T16:34:16.286Z] 21054.00 IOPS, 82.24 MiB/s 00:07:31.400 Latency(us) 00:07:31.400 [2024-11-20T16:34:16.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.400 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x0 length 0xbd0bd 00:07:31.400 Nvme0n1 : 5.08 1512.36 5.91 0.00 0.00 84430.07 13107.20 143574.25 00:07:31.400 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:31.400 Nvme0n1 : 5.08 1451.86 5.67 0.00 0.00 87685.56 17140.18 153253.42 00:07:31.400 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x0 length 0x4ff80 00:07:31.400 Nvme1n1p1 : 5.08 1511.41 5.90 0.00 0.00 84339.56 10939.47 141961.06 00:07:31.400 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:31.400 Nvme1n1p1 : 5.08 1449.81 5.66 0.00 0.00 87778.05 18652.55 138734.67 00:07:31.400 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x0 length 0x4ff7f 00:07:31.400 Nvme1n1p2 : 5.08 1510.37 5.90 0.00 0.00 84238.91 9376.69 139541.27 00:07:31.400 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:31.400 Nvme1n1p2 : 5.09 1450.37 5.67 0.00 0.00 87550.15 10435.35 137928.07 00:07:31.400 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x0 length 0x80000 00:07:31.400 Nvme2n1 : 5.08 1511.30 5.90 0.00 0.00 84084.80 8872.57 136314.88 00:07:31.400 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x80000 length 0x80000 00:07:31.400 Nvme2n1 : 5.09 1455.48 5.69 0.00 0.00 87222.79 10889.06 133895.09 00:07:31.400 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x0 length 0x80000 00:07:31.400 Nvme2n2 : 5.08 1510.84 5.90 0.00 0.00 83955.18 9679.16 133895.09 00:07:31.400 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x80000 length 0x80000 00:07:31.400 Nvme2n2 : 5.10 1455.70 5.69 0.00 0.00 87059.00 10334.52 131475.30 00:07:31.400 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x0 length 0x80000 00:07:31.400 Nvme2n3 : 5.09 1507.65 5.89 0.00 0.00 84004.63 13107.20 131475.30 00:07:31.400 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x80000 length 0x80000 00:07:31.400 Nvme2n3 : 5.10 1454.75 5.68 0.00 0.00 86932.02 9779.99 127442.31 00:07:31.400 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x0 length 0x20000 00:07:31.400 Nvme3n1 : 5.09 1509.39 5.90 0.00 0.00 83716.24 9477.51 129862.10 00:07:31.400 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.400 Verification LBA range: start 0x20000 length 0x20000 
00:07:31.400 Nvme3n1 : 5.10 1456.11 5.69 0.00 0.00 86745.84 6049.48 125022.52 00:07:31.400 [2024-11-20T16:34:16.286Z] =================================================================================================================== 00:07:31.400 [2024-11-20T16:34:16.286Z] Total : 20747.40 81.04 0.00 0.00 85666.08 6049.48 153253.42 00:07:32.776 00:07:32.776 real 0m7.265s 00:07:32.776 user 0m13.588s 00:07:32.777 sys 0m0.225s 00:07:32.777 16:34:17 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.777 16:34:17 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:32.777 ************************************ 00:07:32.777 END TEST bdev_verify 00:07:32.777 ************************************ 00:07:32.777 16:34:17 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.777 16:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:32.777 16:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.777 16:34:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.777 ************************************ 00:07:32.777 START TEST bdev_verify_big_io 00:07:32.777 ************************************ 00:07:32.777 16:34:17 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.777 [2024-11-20 16:34:17.351793] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:32.777 [2024-11-20 16:34:17.351916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61925 ] 00:07:32.777 [2024-11-20 16:34:17.514988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.777 [2024-11-20 16:34:17.624951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.777 [2024-11-20 16:34:17.625227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.716 Running I/O for 5 seconds... 
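The IOPS and MiB/s columns in these tables are consistent with each other: MiB/s = IOPS x I/O size / 2^20. A quick check against the first sample of the 4 KiB verify run above and the 64 KiB big-I/O run that follows:

    # 4 KiB verify run, first sample: 22592 IOPS
    awk 'BEGIN { print 22592 * 4096 / 2^20 }'    # 88.25   -> "88.25 MiB/s"
    # 64 KiB big-I/O run, first sample: 1213 IOPS
    awk 'BEGIN { print 1213 * 65536 / 2^20 }'    # 75.8125 -> "75.81 MiB/s"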
00:07:39.817 1213.00 IOPS, 75.81 MiB/s [2024-11-20T16:34:24.703Z] 2627.00 IOPS, 164.19 MiB/s [2024-11-20T16:34:25.275Z] 3000.33 IOPS, 187.52 MiB/s 00:07:40.389 Latency(us) 00:07:40.389 [2024-11-20T16:34:25.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.389 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x0 length 0xbd0b 00:07:40.389 Nvme0n1 : 5.97 96.71 6.04 0.00 0.00 1221597.16 13913.80 1961643.72 00:07:40.389 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:40.389 Nvme0n1 : 5.79 80.13 5.01 0.00 0.00 1511712.78 18047.61 1768060.46 00:07:40.389 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x0 length 0x4ff8 00:07:40.389 Nvme1n1p1 : 6.08 100.54 6.28 0.00 0.00 1127425.80 144380.85 1651910.50 00:07:40.389 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:40.389 Nvme1n1p1 : 5.96 100.15 6.26 0.00 0.00 1183853.92 87112.47 1116330.14 00:07:40.389 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x0 length 0x4ff7 00:07:40.389 Nvme1n1p2 : 6.17 108.27 6.77 0.00 0.00 1015267.05 88725.66 1335724.50 00:07:40.389 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:40.389 Nvme1n1p2 : 5.96 103.48 6.47 0.00 0.00 1114217.89 88725.66 1090519.04 00:07:40.389 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x0 length 0x8000 00:07:40.389 Nvme2n1 : 6.21 108.78 6.80 0.00 0.00 960619.20 40531.50 1232480.10 00:07:40.389 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x8000 length 0x8000 00:07:40.389 Nvme2n1 : 5.97 107.29 6.71 0.00 0.00 1052562.04 77433.30 1232480.10 00:07:40.389 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x0 length 0x8000 00:07:40.389 Nvme2n2 : 6.35 120.30 7.52 0.00 0.00 839067.46 26819.35 1845493.76 00:07:40.389 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x8000 length 0x8000 00:07:40.389 Nvme2n2 : 6.08 109.87 6.87 0.00 0.00 990654.98 82676.18 1264743.98 00:07:40.389 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x0 length 0x8000 00:07:40.389 Nvme2n3 : 6.49 148.80 9.30 0.00 0.00 653224.42 16535.24 1897115.96 00:07:40.389 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x8000 length 0x8000 00:07:40.389 Nvme2n3 : 6.18 119.75 7.48 0.00 0.00 888159.68 23290.49 1277649.53 00:07:40.389 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x0 length 0x2000 00:07:40.389 Nvme3n1 : 6.70 220.97 13.81 0.00 0.00 421780.85 705.77 1961643.72 00:07:40.389 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.389 Verification LBA range: start 0x2000 length 0x2000 00:07:40.389 Nvme3n1 : 6.19 127.83 7.99 0.00 0.00 805475.11 5293.29 1303460.63 00:07:40.389 
[2024-11-20T16:34:25.275Z] =================================================================================================================== 00:07:40.389 [2024-11-20T16:34:25.275Z] Total : 1652.86 103.30 0.00 0.00 910715.26 705.77 1961643.72 00:07:42.375 00:07:42.375 real 0m9.908s 00:07:42.375 user 0m18.815s 00:07:42.375 sys 0m0.243s 00:07:42.375 16:34:27 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.375 ************************************ 00:07:42.375 END TEST bdev_verify_big_io 00:07:42.375 ************************************ 00:07:42.375 16:34:27 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:42.375 16:34:27 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.375 16:34:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:42.375 16:34:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.375 16:34:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.638 ************************************ 00:07:42.638 START TEST bdev_write_zeroes 00:07:42.638 ************************************ 00:07:42.638 16:34:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.638 [2024-11-20 16:34:27.324475] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:42.638 [2024-11-20 16:34:27.324603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62045 ] 00:07:42.638 [2024-11-20 16:34:27.485724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.900 [2024-11-20 16:34:27.651990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.472 Running I/O for 1 seconds... 
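The write_zeroes pass reuses the same harness but, as the workload name suggests, issues write-zeroes operations instead of buffered writes, limited to one second on a single core (the EAL parameters above show -c 0x1). A self-contained sketch of the equivalent invocation, with paths as printed by run_test:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # Same queue depth and I/O size as the verify runs; only the workload
    # type and the one-second duration differ.
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1 ''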
00:07:44.414 39374.00 IOPS, 153.80 MiB/s 00:07:44.414 Latency(us) 00:07:44.414 [2024-11-20T16:34:29.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.414 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.414 Nvme0n1 : 1.03 5565.19 21.74 0.00 0.00 22907.75 8721.33 258111.02 00:07:44.414 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.414 Nvme1n1p1 : 1.03 5668.92 22.14 0.00 0.00 22463.03 11897.30 245205.46 00:07:44.414 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.414 Nvme1n1p2 : 1.03 5661.45 22.12 0.00 0.00 22417.30 12300.60 246818.66 00:07:44.414 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.414 Nvme2n1 : 1.03 5703.47 22.28 0.00 0.00 22146.33 10082.46 246818.66 00:07:44.414 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.414 Nvme2n2 : 1.03 5696.80 22.25 0.00 0.00 22123.97 10334.52 246818.66 00:07:44.414 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.414 Nvme2n3 : 1.03 5752.07 22.47 0.00 0.00 21861.80 10082.46 237139.50 00:07:44.414 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.414 Nvme3n1 : 1.03 5648.54 22.06 0.00 0.00 22208.29 10737.82 243592.27 00:07:44.414 [2024-11-20T16:34:29.300Z] =================================================================================================================== 00:07:44.414 [2024-11-20T16:34:29.301Z] Total : 39696.45 155.06 0.00 0.00 22300.47 8721.33 258111.02 00:07:45.358 00:07:45.358 real 0m2.791s 00:07:45.358 user 0m2.474s 00:07:45.358 sys 0m0.197s 00:07:45.358 ************************************ 00:07:45.358 END TEST bdev_write_zeroes 00:07:45.358 ************************************ 00:07:45.358 16:34:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.358 16:34:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:45.358 16:34:30 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.358 16:34:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:45.358 16:34:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.358 16:34:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.358 ************************************ 00:07:45.358 START TEST bdev_json_nonenclosed 00:07:45.358 ************************************ 00:07:45.358 16:34:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.358 [2024-11-20 16:34:30.185541] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:07:45.358 [2024-11-20 16:34:30.185675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62099 ] 00:07:45.619 [2024-11-20 16:34:30.348540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.619 [2024-11-20 16:34:30.459692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.619 [2024-11-20 16:34:30.459790] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:45.619 [2024-11-20 16:34:30.459808] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.619 [2024-11-20 16:34:30.459818] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.880 00:07:45.880 real 0m0.531s 00:07:45.880 user 0m0.328s 00:07:45.880 sys 0m0.097s 00:07:45.880 16:34:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.880 ************************************ 00:07:45.880 END TEST bdev_json_nonenclosed 00:07:45.880 ************************************ 00:07:45.880 16:34:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:45.880 16:34:30 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.880 16:34:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:45.880 16:34:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.880 16:34:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.880 ************************************ 00:07:45.880 START TEST bdev_json_nonarray 00:07:45.880 ************************************ 00:07:45.880 16:34:30 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.141 [2024-11-20 16:34:30.782323] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:46.141 [2024-11-20 16:34:30.782472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62130 ] 00:07:46.141 [2024-11-20 16:34:30.942249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.402 [2024-11-20 16:34:31.056195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.402 [2024-11-20 16:34:31.056314] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:46.402 [2024-11-20 16:34:31.056334] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.402 [2024-11-20 16:34:31.056343] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.402 00:07:46.402 real 0m0.534s 00:07:46.402 user 0m0.325s 00:07:46.402 sys 0m0.104s 00:07:46.402 16:34:31 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.402 16:34:31 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:46.402 ************************************ 00:07:46.402 END TEST bdev_json_nonarray 00:07:46.402 ************************************ 00:07:46.664 16:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:46.664 16:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:46.664 16:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:46.664 16:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.664 16:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.664 16:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.664 ************************************ 00:07:46.664 START TEST bdev_gpt_uuid 00:07:46.664 ************************************ 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62150 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62150 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62150 ']' 00:07:46.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.664 16:34:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:46.664 [2024-11-20 16:34:31.399525] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:07:46.664 [2024-11-20 16:34:31.399668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62150 ] 00:07:46.926 [2024-11-20 16:34:31.562218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.926 [2024-11-20 16:34:31.683772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.867 Some configs were skipped because the RPC state that can call them passed over. 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.867 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.128 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.128 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:48.128 { 00:07:48.128 "name": "Nvme1n1p1", 00:07:48.128 "aliases": [ 00:07:48.128 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:48.128 ], 00:07:48.128 "product_name": "GPT Disk", 00:07:48.128 "block_size": 4096, 00:07:48.128 "num_blocks": 655104, 00:07:48.128 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:48.128 "assigned_rate_limits": { 00:07:48.128 "rw_ios_per_sec": 0, 00:07:48.128 "rw_mbytes_per_sec": 0, 00:07:48.128 "r_mbytes_per_sec": 0, 00:07:48.128 "w_mbytes_per_sec": 0 00:07:48.128 }, 00:07:48.128 "claimed": false, 00:07:48.128 "zoned": false, 00:07:48.128 "supported_io_types": { 00:07:48.128 "read": true, 00:07:48.128 "write": true, 00:07:48.128 "unmap": true, 00:07:48.128 "flush": true, 00:07:48.128 "reset": true, 00:07:48.128 "nvme_admin": false, 00:07:48.128 "nvme_io": false, 00:07:48.128 "nvme_io_md": false, 00:07:48.128 "write_zeroes": true, 00:07:48.128 "zcopy": false, 00:07:48.128 "get_zone_info": false, 00:07:48.128 "zone_management": false, 00:07:48.128 "zone_append": false, 00:07:48.128 "compare": true, 00:07:48.128 "compare_and_write": false, 00:07:48.128 "abort": true, 00:07:48.128 "seek_hole": false, 00:07:48.128 "seek_data": false, 00:07:48.128 "copy": true, 00:07:48.128 "nvme_iov_md": false 00:07:48.128 }, 00:07:48.128 "driver_specific": { 
00:07:48.128 "gpt": { 00:07:48.128 "base_bdev": "Nvme1n1", 00:07:48.129 "offset_blocks": 256, 00:07:48.129 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:48.129 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:48.129 "partition_name": "SPDK_TEST_first" 00:07:48.129 } 00:07:48.129 } 00:07:48.129 } 00:07:48.129 ]' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:48.129 { 00:07:48.129 "name": "Nvme1n1p2", 00:07:48.129 "aliases": [ 00:07:48.129 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:48.129 ], 00:07:48.129 "product_name": "GPT Disk", 00:07:48.129 "block_size": 4096, 00:07:48.129 "num_blocks": 655103, 00:07:48.129 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.129 "assigned_rate_limits": { 00:07:48.129 "rw_ios_per_sec": 0, 00:07:48.129 "rw_mbytes_per_sec": 0, 00:07:48.129 "r_mbytes_per_sec": 0, 00:07:48.129 "w_mbytes_per_sec": 0 00:07:48.129 }, 00:07:48.129 "claimed": false, 00:07:48.129 "zoned": false, 00:07:48.129 "supported_io_types": { 00:07:48.129 "read": true, 00:07:48.129 "write": true, 00:07:48.129 "unmap": true, 00:07:48.129 "flush": true, 00:07:48.129 "reset": true, 00:07:48.129 "nvme_admin": false, 00:07:48.129 "nvme_io": false, 00:07:48.129 "nvme_io_md": false, 00:07:48.129 "write_zeroes": true, 00:07:48.129 "zcopy": false, 00:07:48.129 "get_zone_info": false, 00:07:48.129 "zone_management": false, 00:07:48.129 "zone_append": false, 00:07:48.129 "compare": true, 00:07:48.129 "compare_and_write": false, 00:07:48.129 "abort": true, 00:07:48.129 "seek_hole": false, 00:07:48.129 "seek_data": false, 00:07:48.129 "copy": true, 00:07:48.129 "nvme_iov_md": false 00:07:48.129 }, 00:07:48.129 "driver_specific": { 00:07:48.129 "gpt": { 00:07:48.129 "base_bdev": "Nvme1n1", 00:07:48.129 "offset_blocks": 655360, 00:07:48.129 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:48.129 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.129 "partition_name": "SPDK_TEST_second" 00:07:48.129 } 00:07:48.129 } 00:07:48.129 } 00:07:48.129 ]' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62150 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62150 ']' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62150 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62150 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.129 killing process with pid 62150 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62150' 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62150 00:07:48.129 16:34:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62150 00:07:50.077 00:07:50.077 real 0m3.345s 00:07:50.077 user 0m3.459s 00:07:50.077 sys 0m0.489s 00:07:50.077 16:34:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.077 16:34:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:50.077 ************************************ 00:07:50.077 END TEST bdev_gpt_uuid 00:07:50.077 ************************************ 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:50.077 16:34:34 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:50.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.339 Waiting for block devices as requested 00:07:50.602 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.602 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:50.602 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.863 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:56.165 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:56.165 16:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:56.165 16:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:56.466 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:56.466 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:56.466 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:56.466 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:56.466 16:34:41 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:56.466 00:07:56.466 real 0m57.909s 00:07:56.466 user 1m14.406s 00:07:56.466 sys 0m7.939s 00:07:56.466 16:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.466 ************************************ 00:07:56.466 END TEST blockdev_nvme_gpt 00:07:56.466 ************************************ 00:07:56.466 16:34:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.466 16:34:41 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:56.466 16:34:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.466 16:34:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.466 16:34:41 -- common/autotest_common.sh@10 -- # set +x 00:07:56.466 ************************************ 00:07:56.466 START TEST nvme 00:07:56.466 ************************************ 00:07:56.466 16:34:41 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:56.466 * Looking for test storage... 00:07:56.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:56.466 16:34:41 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.466 16:34:41 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.466 16:34:41 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.466 16:34:41 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.466 16:34:41 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.466 16:34:41 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.466 16:34:41 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.466 16:34:41 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.466 16:34:41 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.466 16:34:41 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.466 16:34:41 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.466 16:34:41 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:56.466 16:34:41 nvme -- scripts/common.sh@345 -- # : 1 00:07:56.466 16:34:41 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.466 16:34:41 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.466 16:34:41 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:56.466 16:34:41 nvme -- scripts/common.sh@353 -- # local d=1 00:07:56.466 16:34:41 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.466 16:34:41 nvme -- scripts/common.sh@355 -- # echo 1 00:07:56.466 16:34:41 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.466 16:34:41 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@353 -- # local d=2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.466 16:34:41 nvme -- scripts/common.sh@355 -- # echo 2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.466 16:34:41 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.466 16:34:41 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.466 16:34:41 nvme -- scripts/common.sh@368 -- # return 0 00:07:56.466 16:34:41 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.467 16:34:41 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.467 --rc genhtml_branch_coverage=1 00:07:56.467 --rc genhtml_function_coverage=1 00:07:56.467 --rc genhtml_legend=1 00:07:56.467 --rc geninfo_all_blocks=1 00:07:56.467 --rc geninfo_unexecuted_blocks=1 00:07:56.467 00:07:56.467 ' 00:07:56.467 16:34:41 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.467 --rc genhtml_branch_coverage=1 00:07:56.467 --rc genhtml_function_coverage=1 00:07:56.467 --rc genhtml_legend=1 00:07:56.467 --rc geninfo_all_blocks=1 00:07:56.467 --rc geninfo_unexecuted_blocks=1 00:07:56.467 00:07:56.467 ' 00:07:56.467 16:34:41 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.467 --rc genhtml_branch_coverage=1 00:07:56.467 --rc genhtml_function_coverage=1 00:07:56.467 --rc genhtml_legend=1 00:07:56.467 --rc geninfo_all_blocks=1 00:07:56.467 --rc geninfo_unexecuted_blocks=1 00:07:56.467 00:07:56.467 ' 00:07:56.467 16:34:41 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.467 --rc genhtml_branch_coverage=1 00:07:56.467 --rc genhtml_function_coverage=1 00:07:56.467 --rc genhtml_legend=1 00:07:56.467 --rc geninfo_all_blocks=1 00:07:56.467 --rc geninfo_unexecuted_blocks=1 00:07:56.467 00:07:56.467 ' 00:07:56.467 16:34:41 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:57.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:57.612 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.612 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.612 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.612 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.612 16:34:42 nvme -- nvme/nvme.sh@79 -- # uname 00:07:57.612 16:34:42 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:57.612 16:34:42 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:57.612 16:34:42 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:57.612 16:34:42 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:57.612 Waiting for stub to ready for secondary processes... 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1075 -- # stubpid=62796 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62796 ]] 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:57.612 16:34:42 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:57.872 [2024-11-20 16:34:42.510355] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:57.872 [2024-11-20 16:34:42.510527] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:58.812 16:34:43 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:58.812 16:34:43 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62796 ]] 00:07:58.812 16:34:43 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:59.074 [2024-11-20 16:34:43.726709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.074 [2024-11-20 16:34:43.856406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.074 [2024-11-20 16:34:43.857118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.074 [2024-11-20 16:34:43.857263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.074 [2024-11-20 16:34:43.876173] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:59.074 [2024-11-20 16:34:43.876223] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.074 [2024-11-20 16:34:43.884834] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:59.074 [2024-11-20 16:34:43.884960] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:59.074 [2024-11-20 16:34:43.887340] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.074 [2024-11-20 16:34:43.887604] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:59.074 [2024-11-20 16:34:43.887685] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:59.074 [2024-11-20 16:34:43.890042] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.074 [2024-11-20 16:34:43.890257] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:59.074 [2024-11-20 16:34:43.890337] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:59.074 [2024-11-20 16:34:43.893664] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.074 [2024-11-20 16:34:43.893934] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:59.074 [2024-11-20 16:34:43.894067] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:59.074 [2024-11-20 16:34:43.894130] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:59.074 [2024-11-20 16:34:43.894193] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:59.646 done. 00:07:59.646 16:34:44 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:59.646 16:34:44 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:59.646 16:34:44 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:59.646 16:34:44 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:59.646 16:34:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.646 16:34:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.646 ************************************ 00:07:59.646 START TEST nvme_reset 00:07:59.646 ************************************ 00:07:59.646 16:34:44 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:59.906 Initializing NVMe Controllers 00:07:59.906 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:59.906 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:59.906 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:59.906 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:59.906 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:59.906 00:07:59.906 real 0m0.245s 00:07:59.906 user 0m0.086s 00:07:59.906 sys 0m0.107s 00:07:59.906 16:34:44 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.906 ************************************ 00:07:59.906 16:34:44 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:59.906 END TEST nvme_reset 00:07:59.906 ************************************ 00:07:59.906 16:34:44 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:59.906 16:34:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.906 16:34:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.906 16:34:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.167 ************************************ 00:08:00.167 START TEST nvme_identify 00:08:00.167 ************************************ 00:08:00.167 16:34:44 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:00.167 16:34:44 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:00.167 16:34:44 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:00.167 16:34:44 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:00.167 16:34:44 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:00.167 16:34:44 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:00.167 16:34:44 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:00.167 16:34:44 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:00.167 16:34:44 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:00.167 16:34:44 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:00.167 16:34:44 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:00.167 16:34:44 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:00.167 16:34:44 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:00.167 [2024-11-20 16:34:45.049616] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62829 terminated unexpected 00:08:00.432 ===================================================== 00:08:00.432 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.432 ===================================================== 00:08:00.433 Controller Capabilities/Features 00:08:00.433 ================================ 00:08:00.433 Vendor ID: 1b36 00:08:00.433 Subsystem Vendor ID: 1af4 00:08:00.433 Serial Number: 12340 00:08:00.433 Model Number: QEMU NVMe Ctrl 00:08:00.433 Firmware Version: 8.0.0 00:08:00.433 Recommended Arb Burst: 6 00:08:00.433 IEEE OUI Identifier: 00 54 52 00:08:00.433 Multi-path I/O 00:08:00.433 May have multiple subsystem ports: No 00:08:00.433 May have multiple controllers: No 00:08:00.433 Associated with SR-IOV VF: No 00:08:00.433 Max Data Transfer Size: 524288 00:08:00.433 Max Number of Namespaces: 256 00:08:00.433 Max Number of I/O Queues: 64 00:08:00.433 NVMe Specification Version (VS): 1.4 00:08:00.433 NVMe Specification Version (Identify): 1.4 00:08:00.433 Maximum Queue Entries: 2048 00:08:00.433 Contiguous Queues Required: Yes 00:08:00.433 Arbitration Mechanisms Supported 00:08:00.433 Weighted Round Robin: Not Supported 00:08:00.433 Vendor Specific: Not Supported 00:08:00.433 Reset Timeout: 7500 ms 00:08:00.433 Doorbell Stride: 4 bytes 00:08:00.433 NVM Subsystem Reset: Not Supported 00:08:00.433 Command Sets Supported 00:08:00.433 NVM Command Set: Supported 00:08:00.433 Boot Partition: Not Supported 00:08:00.433 Memory Page Size Minimum: 4096 bytes 00:08:00.433 Memory Page Size Maximum: 65536 bytes 00:08:00.433 Persistent Memory Region: Not Supported 00:08:00.433 Optional Asynchronous Events Supported 00:08:00.433 Namespace Attribute Notices: Supported 00:08:00.433 Firmware Activation Notices: Not Supported 00:08:00.433 ANA Change Notices: Not Supported 00:08:00.433 PLE Aggregate Log Change Notices: Not Supported 00:08:00.433 LBA Status Info Alert Notices: Not Supported 00:08:00.433 EGE Aggregate Log Change Notices: Not Supported 00:08:00.433 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.433 Zone Descriptor Change Notices: Not Supported 00:08:00.433 Discovery Log Change Notices: Not Supported 00:08:00.433 Controller Attributes 00:08:00.433 128-bit Host Identifier: Not Supported 00:08:00.433 Non-Operational Permissive Mode: Not Supported 00:08:00.433 NVM Sets: Not Supported 00:08:00.433 Read Recovery Levels: Not Supported 00:08:00.433 Endurance Groups: Not Supported 00:08:00.433 Predictable Latency Mode: Not Supported 00:08:00.433 Traffic Based Keep ALive: Not Supported 00:08:00.433 Namespace Granularity: Not Supported 00:08:00.433 SQ Associations: Not Supported 00:08:00.433 UUID List: Not Supported 00:08:00.433 Multi-Domain Subsystem: Not Supported 00:08:00.433 Fixed Capacity Management: Not Supported 00:08:00.433 Variable Capacity Management: Not Supported 00:08:00.433 Delete Endurance Group: Not Supported 00:08:00.433 Delete NVM Set: Not Supported 00:08:00.433 Extended LBA Formats Supported: Supported 00:08:00.433 Flexible Data Placement Supported: Not Supported 00:08:00.433 00:08:00.433 Controller Memory Buffer Support 00:08:00.433 ================================ 00:08:00.433 Supported: No 
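The trace above shows how this identify pass finds its targets: get_nvme_bdfs runs scripts/gen_nvme.sh and filters the JSON with jq -r '.config[].params.traddr' to collect each controller's PCIe address, after which nvme.sh launches build/bin/spdk_nvme_identify -i 0 to dump every attached controller. A minimal standalone sketch of that enumeration step, assuming the same /home/vagrant/spdk_repo/spdk checkout used in this run, would be:

#!/usr/bin/env bash
# Hedged sketch: mirror the traced get_nvme_bdfs helper, then run the same
# identify invocation shown in the log. Paths are assumptions taken from this run.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers detected" >&2; exit 1; }
printf 'controller: %s\n' "${bdfs[@]}"
"$rootdir/build/bin/spdk_nvme_identify" -i 0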
00:08:00.433 00:08:00.433 Persistent Memory Region Support 00:08:00.433 ================================ 00:08:00.433 Supported: No 00:08:00.433 00:08:00.433 Admin Command Set Attributes 00:08:00.433 ============================ 00:08:00.433 Security Send/Receive: Not Supported 00:08:00.433 Format NVM: Supported 00:08:00.433 Firmware Activate/Download: Not Supported 00:08:00.433 Namespace Management: Supported 00:08:00.433 Device Self-Test: Not Supported 00:08:00.433 Directives: Supported 00:08:00.433 NVMe-MI: Not Supported 00:08:00.433 Virtualization Management: Not Supported 00:08:00.433 Doorbell Buffer Config: Supported 00:08:00.433 Get LBA Status Capability: Not Supported 00:08:00.433 Command & Feature Lockdown Capability: Not Supported 00:08:00.433 Abort Command Limit: 4 00:08:00.433 Async Event Request Limit: 4 00:08:00.433 Number of Firmware Slots: N/A 00:08:00.433 Firmware Slot 1 Read-Only: N/A 00:08:00.433 Firmware Activation Without Reset: N/A 00:08:00.433 Multiple Update Detection Support: N/A 00:08:00.433 Firmware Update Granularity: No Information Provided 00:08:00.433 Per-Namespace SMART Log: Yes 00:08:00.433 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.433 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:00.433 Command Effects Log Page: Supported 00:08:00.433 Get Log Page Extended Data: Supported 00:08:00.433 Telemetry Log Pages: Not Supported 00:08:00.433 Persistent Event Log Pages: Not Supported 00:08:00.433 Supported Log Pages Log Page: May Support 00:08:00.433 Commands Supported & Effects Log Page: Not Supported 00:08:00.433 Feature Identifiers & Effects Log Page:May Support 00:08:00.433 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.433 Data Area 4 for Telemetry Log: Not Supported 00:08:00.433 Error Log Page Entries Supported: 1 00:08:00.433 Keep Alive: Not Supported 00:08:00.433 00:08:00.433 NVM Command Set Attributes 00:08:00.433 ========================== 00:08:00.433 Submission Queue Entry Size 00:08:00.433 Max: 64 00:08:00.433 Min: 64 00:08:00.433 Completion Queue Entry Size 00:08:00.433 Max: 16 00:08:00.433 Min: 16 00:08:00.433 Number of Namespaces: 256 00:08:00.433 Compare Command: Supported 00:08:00.433 Write Uncorrectable Command: Not Supported 00:08:00.433 Dataset Management Command: Supported 00:08:00.433 Write Zeroes Command: Supported 00:08:00.433 Set Features Save Field: Supported 00:08:00.433 Reservations: Not Supported 00:08:00.433 Timestamp: Supported 00:08:00.433 Copy: Supported 00:08:00.433 Volatile Write Cache: Present 00:08:00.433 Atomic Write Unit (Normal): 1 00:08:00.433 Atomic Write Unit (PFail): 1 00:08:00.433 Atomic Compare & Write Unit: 1 00:08:00.433 Fused Compare & Write: Not Supported 00:08:00.433 Scatter-Gather List 00:08:00.433 SGL Command Set: Supported 00:08:00.433 SGL Keyed: Not Supported 00:08:00.433 SGL Bit Bucket Descriptor: Not Supported 00:08:00.433 SGL Metadata Pointer: Not Supported 00:08:00.433 Oversized SGL: Not Supported 00:08:00.433 SGL Metadata Address: Not Supported 00:08:00.433 SGL Offset: Not Supported 00:08:00.433 Transport SGL Data Block: Not Supported 00:08:00.433 Replay Protected Memory Block: Not Supported 00:08:00.433 00:08:00.433 Firmware Slot Information 00:08:00.433 ========================= 00:08:00.433 Active slot: 1 00:08:00.433 Slot 1 Firmware Revision: 1.0 00:08:00.433 00:08:00.433 00:08:00.433 Commands Supported and Effects 00:08:00.433 ============================== 00:08:00.433 Admin Commands 00:08:00.433 -------------- 00:08:00.433 Delete I/O Submission Queue (00h): Supported 
00:08:00.433 Create I/O Submission Queue (01h): Supported 00:08:00.433 Get Log Page (02h): Supported 00:08:00.433 Delete I/O Completion Queue (04h): Supported 00:08:00.433 Create I/O Completion Queue (05h): Supported 00:08:00.433 Identify (06h): Supported 00:08:00.433 Abort (08h): Supported 00:08:00.433 Set Features (09h): Supported 00:08:00.433 Get Features (0Ah): Supported 00:08:00.433 Asynchronous Event Request (0Ch): Supported 00:08:00.433 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.433 Directive Send (19h): Supported 00:08:00.433 Directive Receive (1Ah): Supported 00:08:00.433 Virtualization Management (1Ch): Supported 00:08:00.433 Doorbell Buffer Config (7Ch): Supported 00:08:00.433 Format NVM (80h): Supported LBA-Change 00:08:00.433 I/O Commands 00:08:00.433 ------------ 00:08:00.433 Flush (00h): Supported LBA-Change 00:08:00.433 Write (01h): Supported LBA-Change 00:08:00.433 Read (02h): Supported 00:08:00.433 Compare (05h): Supported 00:08:00.433 Write Zeroes (08h): Supported LBA-Change 00:08:00.433 Dataset Management (09h): Supported LBA-Change 00:08:00.433 Unknown (0Ch): Supported 00:08:00.433 Unknown (12h): Supported 00:08:00.433 Copy (19h): Supported LBA-Change 00:08:00.433 Unknown (1Dh): Supported LBA-Change 00:08:00.433 00:08:00.433 Error Log 00:08:00.433 ========= 00:08:00.433 00:08:00.433 Arbitration 00:08:00.433 =========== 00:08:00.433 Arbitration Burst: no limit 00:08:00.433 00:08:00.433 Power Management 00:08:00.433 ================ 00:08:00.433 Number of Power States: 1 00:08:00.433 Current Power State: Power State #0 00:08:00.433 Power State #0: 00:08:00.433 Max Power: 25.00 W 00:08:00.433 Non-Operational State: Operational 00:08:00.433 Entry Latency: 16 microseconds 00:08:00.433 Exit Latency: 4 microseconds 00:08:00.433 Relative Read Throughput: 0 00:08:00.433 Relative Read Latency: 0 00:08:00.433 Relative Write Throughput: 0 00:08:00.434 Relative Write Latency: 0 00:08:00.434 Idle Power[2024-11-20 16:34:45.051652] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62829 terminated unexpected 00:08:00.434 : Not Reported 00:08:00.434 Active Power: Not Reported 00:08:00.434 Non-Operational Permissive Mode: Not Supported 00:08:00.434 00:08:00.434 Health Information 00:08:00.434 ================== 00:08:00.434 Critical Warnings: 00:08:00.434 Available Spare Space: OK 00:08:00.434 Temperature: OK 00:08:00.434 Device Reliability: OK 00:08:00.434 Read Only: No 00:08:00.434 Volatile Memory Backup: OK 00:08:00.434 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.434 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.434 Available Spare: 0% 00:08:00.434 Available Spare Threshold: 0% 00:08:00.434 Life Percentage Used: 0% 00:08:00.434 Data Units Read: 648 00:08:00.434 Data Units Written: 576 00:08:00.434 Host Read Commands: 36953 00:08:00.434 Host Write Commands: 36739 00:08:00.434 Controller Busy Time: 0 minutes 00:08:00.434 Power Cycles: 0 00:08:00.434 Power On Hours: 0 hours 00:08:00.434 Unsafe Shutdowns: 0 00:08:00.434 Unrecoverable Media Errors: 0 00:08:00.434 Lifetime Error Log Entries: 0 00:08:00.434 Warning Temperature Time: 0 minutes 00:08:00.434 Critical Temperature Time: 0 minutes 00:08:00.434 00:08:00.434 Number of Queues 00:08:00.434 ================ 00:08:00.434 Number of I/O Submission Queues: 64 00:08:00.434 Number of I/O Completion Queues: 64 00:08:00.434 00:08:00.434 ZNS Specific Controller Data 00:08:00.434 ============================ 00:08:00.434 Zone Append Size Limit: 0 00:08:00.434 
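Before any of these identify dumps could run, the harness waited for the SPDK stub to come up (the '[' -e /var/run/spdk_stub0 ']' / sleep 1s trace near the start of the nvme suite). A stripped-down sketch of that readiness poll, reusing the stub arguments seen in this run and a hypothetical stubpid variable, looks like:

#!/usr/bin/env bash
# Hedged sketch of the stub-readiness wait traced earlier in this log.
rootdir=/home/vagrant/spdk_repo/spdk                 # assumed checkout path from this run
"$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &  # same arguments as the traced invocation
stubpid=$!
echo 'Waiting for stub to ready for secondary processes...'
while [ ! -e /var/run/spdk_stub0 ]; do
    # Bail out if the stub died before creating its socket file.
    [[ -e /proc/$stubpid ]] || { echo 'stub exited before creating /var/run/spdk_stub0' >&2; exit 1; }
    sleep 1s
done
echo done.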
00:08:00.434 00:08:00.434 Active Namespaces 00:08:00.434 ================= 00:08:00.434 Namespace ID:1 00:08:00.434 Error Recovery Timeout: Unlimited 00:08:00.434 Command Set Identifier: NVM (00h) 00:08:00.434 Deallocate: Supported 00:08:00.434 Deallocated/Unwritten Error: Supported 00:08:00.434 Deallocated Read Value: All 0x00 00:08:00.434 Deallocate in Write Zeroes: Not Supported 00:08:00.434 Deallocated Guard Field: 0xFFFF 00:08:00.434 Flush: Supported 00:08:00.434 Reservation: Not Supported 00:08:00.434 Metadata Transferred as: Separate Metadata Buffer 00:08:00.434 Namespace Sharing Capabilities: Private 00:08:00.434 Size (in LBAs): 1548666 (5GiB) 00:08:00.434 Capacity (in LBAs): 1548666 (5GiB) 00:08:00.434 Utilization (in LBAs): 1548666 (5GiB) 00:08:00.434 Thin Provisioning: Not Supported 00:08:00.434 Per-NS Atomic Units: No 00:08:00.434 Maximum Single Source Range Length: 128 00:08:00.434 Maximum Copy Length: 128 00:08:00.434 Maximum Source Range Count: 128 00:08:00.434 NGUID/EUI64 Never Reused: No 00:08:00.434 Namespace Write Protected: No 00:08:00.434 Number of LBA Formats: 8 00:08:00.434 Current LBA Format: LBA Format #07 00:08:00.434 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.434 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.434 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.434 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.434 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.434 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.434 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.434 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.434 00:08:00.434 NVM Specific Namespace Data 00:08:00.434 =========================== 00:08:00.434 Logical Block Storage Tag Mask: 0 00:08:00.434 Protection Information Capabilities: 00:08:00.434 16b Guard Protection Information Storage Tag Support: No 00:08:00.434 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.434 Storage Tag Check Read Support: No 00:08:00.434 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.434 ===================================================== 00:08:00.434 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.434 ===================================================== 00:08:00.434 Controller Capabilities/Features 00:08:00.434 ================================ 00:08:00.434 Vendor ID: 1b36 00:08:00.434 Subsystem Vendor ID: 1af4 00:08:00.434 Serial Number: 12341 00:08:00.434 Model Number: QEMU NVMe Ctrl 00:08:00.434 Firmware Version: 8.0.0 00:08:00.434 Recommended Arb Burst: 6 00:08:00.434 IEEE OUI Identifier: 00 54 52 00:08:00.434 Multi-path I/O 00:08:00.434 May have multiple subsystem ports: No 00:08:00.434 May have multiple controllers: No 
00:08:00.434 Associated with SR-IOV VF: No 00:08:00.434 Max Data Transfer Size: 524288 00:08:00.434 Max Number of Namespaces: 256 00:08:00.434 Max Number of I/O Queues: 64 00:08:00.434 NVMe Specification Version (VS): 1.4 00:08:00.434 NVMe Specification Version (Identify): 1.4 00:08:00.434 Maximum Queue Entries: 2048 00:08:00.434 Contiguous Queues Required: Yes 00:08:00.434 Arbitration Mechanisms Supported 00:08:00.434 Weighted Round Robin: Not Supported 00:08:00.434 Vendor Specific: Not Supported 00:08:00.434 Reset Timeout: 7500 ms 00:08:00.434 Doorbell Stride: 4 bytes 00:08:00.434 NVM Subsystem Reset: Not Supported 00:08:00.434 Command Sets Supported 00:08:00.434 NVM Command Set: Supported 00:08:00.434 Boot Partition: Not Supported 00:08:00.434 Memory Page Size Minimum: 4096 bytes 00:08:00.434 Memory Page Size Maximum: 65536 bytes 00:08:00.434 Persistent Memory Region: Not Supported 00:08:00.434 Optional Asynchronous Events Supported 00:08:00.434 Namespace Attribute Notices: Supported 00:08:00.434 Firmware Activation Notices: Not Supported 00:08:00.434 ANA Change Notices: Not Supported 00:08:00.434 PLE Aggregate Log Change Notices: Not Supported 00:08:00.434 LBA Status Info Alert Notices: Not Supported 00:08:00.434 EGE Aggregate Log Change Notices: Not Supported 00:08:00.434 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.434 Zone Descriptor Change Notices: Not Supported 00:08:00.434 Discovery Log Change Notices: Not Supported 00:08:00.434 Controller Attributes 00:08:00.434 128-bit Host Identifier: Not Supported 00:08:00.434 Non-Operational Permissive Mode: Not Supported 00:08:00.434 NVM Sets: Not Supported 00:08:00.434 Read Recovery Levels: Not Supported 00:08:00.434 Endurance Groups: Not Supported 00:08:00.434 Predictable Latency Mode: Not Supported 00:08:00.434 Traffic Based Keep ALive: Not Supported 00:08:00.434 Namespace Granularity: Not Supported 00:08:00.434 SQ Associations: Not Supported 00:08:00.434 UUID List: Not Supported 00:08:00.434 Multi-Domain Subsystem: Not Supported 00:08:00.434 Fixed Capacity Management: Not Supported 00:08:00.434 Variable Capacity Management: Not Supported 00:08:00.434 Delete Endurance Group: Not Supported 00:08:00.434 Delete NVM Set: Not Supported 00:08:00.434 Extended LBA Formats Supported: Supported 00:08:00.434 Flexible Data Placement Supported: Not Supported 00:08:00.434 00:08:00.434 Controller Memory Buffer Support 00:08:00.434 ================================ 00:08:00.434 Supported: No 00:08:00.434 00:08:00.434 Persistent Memory Region Support 00:08:00.434 ================================ 00:08:00.434 Supported: No 00:08:00.434 00:08:00.434 Admin Command Set Attributes 00:08:00.434 ============================ 00:08:00.434 Security Send/Receive: Not Supported 00:08:00.434 Format NVM: Supported 00:08:00.434 Firmware Activate/Download: Not Supported 00:08:00.434 Namespace Management: Supported 00:08:00.434 Device Self-Test: Not Supported 00:08:00.434 Directives: Supported 00:08:00.435 NVMe-MI: Not Supported 00:08:00.435 Virtualization Management: Not Supported 00:08:00.435 Doorbell Buffer Config: Supported 00:08:00.435 Get LBA Status Capability: Not Supported 00:08:00.435 Command & Feature Lockdown Capability: Not Supported 00:08:00.435 Abort Command Limit: 4 00:08:00.435 Async Event Request Limit: 4 00:08:00.435 Number of Firmware Slots: N/A 00:08:00.435 Firmware Slot 1 Read-Only: N/A 00:08:00.435 Firmware Activation Without Reset: N/A 00:08:00.435 Multiple Update Detection Support: N/A 00:08:00.435 Firmware Update Granularity: No 
Information Provided 00:08:00.435 Per-Namespace SMART Log: Yes 00:08:00.435 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.435 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:00.435 Command Effects Log Page: Supported 00:08:00.435 Get Log Page Extended Data: Supported 00:08:00.435 Telemetry Log Pages: Not Supported 00:08:00.435 Persistent Event Log Pages: Not Supported 00:08:00.435 Supported Log Pages Log Page: May Support 00:08:00.435 Commands Supported & Effects Log Page: Not Supported 00:08:00.435 Feature Identifiers & Effects Log Page:May Support 00:08:00.435 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.435 Data Area 4 for Telemetry Log: Not Supported 00:08:00.435 Error Log Page Entries Supported: 1 00:08:00.435 Keep Alive: Not Supported 00:08:00.435 00:08:00.435 NVM Command Set Attributes 00:08:00.435 ========================== 00:08:00.435 Submission Queue Entry Size 00:08:00.435 Max: 64 00:08:00.435 Min: 64 00:08:00.435 Completion Queue Entry Size 00:08:00.435 Max: 16 00:08:00.435 Min: 16 00:08:00.435 Number of Namespaces: 256 00:08:00.435 Compare Command: Supported 00:08:00.435 Write Uncorrectable Command: Not Supported 00:08:00.435 Dataset Management Command: Supported 00:08:00.435 Write Zeroes Command: Supported 00:08:00.435 Set Features Save Field: Supported 00:08:00.435 Reservations: Not Supported 00:08:00.435 Timestamp: Supported 00:08:00.435 Copy: Supported 00:08:00.435 Volatile Write Cache: Present 00:08:00.435 Atomic Write Unit (Normal): 1 00:08:00.435 Atomic Write Unit (PFail): 1 00:08:00.435 Atomic Compare & Write Unit: 1 00:08:00.435 Fused Compare & Write: Not Supported 00:08:00.435 Scatter-Gather List 00:08:00.435 SGL Command Set: Supported 00:08:00.435 SGL Keyed: Not Supported 00:08:00.435 SGL Bit Bucket Descriptor: Not Supported 00:08:00.435 SGL Metadata Pointer: Not Supported 00:08:00.435 Oversized SGL: Not Supported 00:08:00.435 SGL Metadata Address: Not Supported 00:08:00.435 SGL Offset: Not Supported 00:08:00.435 Transport SGL Data Block: Not Supported 00:08:00.435 Replay Protected Memory Block: Not Supported 00:08:00.435 00:08:00.435 Firmware Slot Information 00:08:00.435 ========================= 00:08:00.435 Active slot: 1 00:08:00.435 Slot 1 Firmware Revision: 1.0 00:08:00.435 00:08:00.435 00:08:00.435 Commands Supported and Effects 00:08:00.435 ============================== 00:08:00.435 Admin Commands 00:08:00.435 -------------- 00:08:00.435 Delete I/O Submission Queue (00h): Supported 00:08:00.435 Create I/O Submission Queue (01h): Supported 00:08:00.435 Get Log Page (02h): Supported 00:08:00.435 Delete I/O Completion Queue (04h): Supported 00:08:00.435 Create I/O Completion Queue (05h): Supported 00:08:00.435 Identify (06h): Supported 00:08:00.435 Abort (08h): Supported 00:08:00.435 Set Features (09h): Supported 00:08:00.435 Get Features (0Ah): Supported 00:08:00.435 Asynchronous Event Request (0Ch): Supported 00:08:00.435 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.435 Directive Send (19h): Supported 00:08:00.435 Directive Receive (1Ah): Supported 00:08:00.435 Virtualization Management (1Ch): Supported 00:08:00.435 Doorbell Buffer Config (7Ch): Supported 00:08:00.435 Format NVM (80h): Supported LBA-Change 00:08:00.435 I/O Commands 00:08:00.435 ------------ 00:08:00.435 Flush (00h): Supported LBA-Change 00:08:00.435 Write (01h): Supported LBA-Change 00:08:00.435 Read (02h): Supported 00:08:00.435 Compare (05h): Supported 00:08:00.435 Write Zeroes (08h): Supported LBA-Change 00:08:00.435 Dataset Management 
(09h): Supported LBA-Change 00:08:00.435 Unknown (0Ch): Supported 00:08:00.435 Unknown (12h): Supported 00:08:00.435 Copy (19h): Supported LBA-Change 00:08:00.435 Unknown (1Dh): Supported LBA-Change 00:08:00.435 00:08:00.435 Error Log 00:08:00.435 ========= 00:08:00.435 00:08:00.435 Arbitration 00:08:00.435 =========== 00:08:00.435 Arbitration Burst: no limit 00:08:00.435 00:08:00.435 Power Management 00:08:00.435 ================ 00:08:00.435 Number of Power States: 1 00:08:00.435 Current Power State: Power State #0 00:08:00.435 Power State #0: 00:08:00.435 Max Power: 25.00 W 00:08:00.435 Non-Operational State: Operational 00:08:00.435 Entry Latency: 16 microseconds 00:08:00.435 Exit Latency: 4 microseconds 00:08:00.435 Relative Read Throughput: 0 00:08:00.435 Relative Read Latency: 0 00:08:00.435 Relative Write Throughput: 0 00:08:00.435 Relative Write Latency: 0 00:08:00.435 Idle Power: Not Reported 00:08:00.435 Active Power: Not Reported 00:08:00.435 Non-Operational Permissive Mode: Not Supported 00:08:00.435 00:08:00.435 Health Information 00:08:00.435 ================== 00:08:00.435 Critical Warnings: 00:08:00.435 Available Spare Space: OK 00:08:00.435 Temperature: [2024-11-20 16:34:45.053240] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62829 terminated unexpected 00:08:00.435 OK 00:08:00.435 Device Reliability: OK 00:08:00.435 Read Only: No 00:08:00.435 Volatile Memory Backup: OK 00:08:00.435 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.435 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.435 Available Spare: 0% 00:08:00.435 Available Spare Threshold: 0% 00:08:00.435 Life Percentage Used: 0% 00:08:00.435 Data Units Read: 1037 00:08:00.435 Data Units Written: 897 00:08:00.435 Host Read Commands: 54610 00:08:00.435 Host Write Commands: 53299 00:08:00.435 Controller Busy Time: 0 minutes 00:08:00.435 Power Cycles: 0 00:08:00.435 Power On Hours: 0 hours 00:08:00.435 Unsafe Shutdowns: 0 00:08:00.435 Unrecoverable Media Errors: 0 00:08:00.435 Lifetime Error Log Entries: 0 00:08:00.435 Warning Temperature Time: 0 minutes 00:08:00.435 Critical Temperature Time: 0 minutes 00:08:00.435 00:08:00.435 Number of Queues 00:08:00.435 ================ 00:08:00.435 Number of I/O Submission Queues: 64 00:08:00.435 Number of I/O Completion Queues: 64 00:08:00.435 00:08:00.435 ZNS Specific Controller Data 00:08:00.435 ============================ 00:08:00.435 Zone Append Size Limit: 0 00:08:00.435 00:08:00.435 00:08:00.435 Active Namespaces 00:08:00.435 ================= 00:08:00.435 Namespace ID:1 00:08:00.435 Error Recovery Timeout: Unlimited 00:08:00.435 Command Set Identifier: NVM (00h) 00:08:00.435 Deallocate: Supported 00:08:00.435 Deallocated/Unwritten Error: Supported 00:08:00.435 Deallocated Read Value: All 0x00 00:08:00.435 Deallocate in Write Zeroes: Not Supported 00:08:00.435 Deallocated Guard Field: 0xFFFF 00:08:00.435 Flush: Supported 00:08:00.435 Reservation: Not Supported 00:08:00.435 Namespace Sharing Capabilities: Private 00:08:00.435 Size (in LBAs): 1310720 (5GiB) 00:08:00.435 Capacity (in LBAs): 1310720 (5GiB) 00:08:00.435 Utilization (in LBAs): 1310720 (5GiB) 00:08:00.435 Thin Provisioning: Not Supported 00:08:00.435 Per-NS Atomic Units: No 00:08:00.435 Maximum Single Source Range Length: 128 00:08:00.435 Maximum Copy Length: 128 00:08:00.435 Maximum Source Range Count: 128 00:08:00.435 NGUID/EUI64 Never Reused: No 00:08:00.435 Namespace Write Protected: No 00:08:00.435 Number of LBA Formats: 8 00:08:00.435 Current LBA Format: 
LBA Format #04 00:08:00.435 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.435 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.435 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.435 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.435 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.435 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.435 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.435 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.435 00:08:00.435 NVM Specific Namespace Data 00:08:00.435 =========================== 00:08:00.435 Logical Block Storage Tag Mask: 0 00:08:00.435 Protection Information Capabilities: 00:08:00.435 16b Guard Protection Information Storage Tag Support: No 00:08:00.435 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.435 Storage Tag Check Read Support: No 00:08:00.435 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.436 ===================================================== 00:08:00.436 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.436 ===================================================== 00:08:00.436 Controller Capabilities/Features 00:08:00.436 ================================ 00:08:00.436 Vendor ID: 1b36 00:08:00.436 Subsystem Vendor ID: 1af4 00:08:00.436 Serial Number: 12343 00:08:00.436 Model Number: QEMU NVMe Ctrl 00:08:00.436 Firmware Version: 8.0.0 00:08:00.436 Recommended Arb Burst: 6 00:08:00.436 IEEE OUI Identifier: 00 54 52 00:08:00.436 Multi-path I/O 00:08:00.436 May have multiple subsystem ports: No 00:08:00.436 May have multiple controllers: Yes 00:08:00.436 Associated with SR-IOV VF: No 00:08:00.436 Max Data Transfer Size: 524288 00:08:00.436 Max Number of Namespaces: 256 00:08:00.436 Max Number of I/O Queues: 64 00:08:00.436 NVMe Specification Version (VS): 1.4 00:08:00.436 NVMe Specification Version (Identify): 1.4 00:08:00.436 Maximum Queue Entries: 2048 00:08:00.436 Contiguous Queues Required: Yes 00:08:00.436 Arbitration Mechanisms Supported 00:08:00.436 Weighted Round Robin: Not Supported 00:08:00.436 Vendor Specific: Not Supported 00:08:00.436 Reset Timeout: 7500 ms 00:08:00.436 Doorbell Stride: 4 bytes 00:08:00.436 NVM Subsystem Reset: Not Supported 00:08:00.436 Command Sets Supported 00:08:00.436 NVM Command Set: Supported 00:08:00.436 Boot Partition: Not Supported 00:08:00.436 Memory Page Size Minimum: 4096 bytes 00:08:00.436 Memory Page Size Maximum: 65536 bytes 00:08:00.436 Persistent Memory Region: Not Supported 00:08:00.436 Optional Asynchronous Events Supported 00:08:00.436 Namespace Attribute Notices: Supported 00:08:00.436 Firmware Activation Notices: Not Supported 00:08:00.436 ANA Change Notices: Not Supported 00:08:00.436 PLE Aggregate Log 
Change Notices: Not Supported 00:08:00.436 LBA Status Info Alert Notices: Not Supported 00:08:00.436 EGE Aggregate Log Change Notices: Not Supported 00:08:00.436 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.436 Zone Descriptor Change Notices: Not Supported 00:08:00.436 Discovery Log Change Notices: Not Supported 00:08:00.436 Controller Attributes 00:08:00.436 128-bit Host Identifier: Not Supported 00:08:00.436 Non-Operational Permissive Mode: Not Supported 00:08:00.436 NVM Sets: Not Supported 00:08:00.436 Read Recovery Levels: Not Supported 00:08:00.436 Endurance Groups: Supported 00:08:00.436 Predictable Latency Mode: Not Supported 00:08:00.436 Traffic Based Keep ALive: Not Supported 00:08:00.436 Namespace Granularity: Not Supported 00:08:00.436 SQ Associations: Not Supported 00:08:00.436 UUID List: Not Supported 00:08:00.436 Multi-Domain Subsystem: Not Supported 00:08:00.436 Fixed Capacity Management: Not Supported 00:08:00.436 Variable Capacity Management: Not Supported 00:08:00.436 Delete Endurance Group: Not Supported 00:08:00.436 Delete NVM Set: Not Supported 00:08:00.436 Extended LBA Formats Supported: Supported 00:08:00.436 Flexible Data Placement Supported: Supported 00:08:00.436 00:08:00.436 Controller Memory Buffer Support 00:08:00.436 ================================ 00:08:00.436 Supported: No 00:08:00.436 00:08:00.436 Persistent Memory Region Support 00:08:00.436 ================================ 00:08:00.436 Supported: No 00:08:00.436 00:08:00.436 Admin Command Set Attributes 00:08:00.436 ============================ 00:08:00.436 Security Send/Receive: Not Supported 00:08:00.436 Format NVM: Supported 00:08:00.436 Firmware Activate/Download: Not Supported 00:08:00.436 Namespace Management: Supported 00:08:00.436 Device Self-Test: Not Supported 00:08:00.436 Directives: Supported 00:08:00.436 NVMe-MI: Not Supported 00:08:00.436 Virtualization Management: Not Supported 00:08:00.436 Doorbell Buffer Config: Supported 00:08:00.436 Get LBA Status Capability: Not Supported 00:08:00.436 Command & Feature Lockdown Capability: Not Supported 00:08:00.436 Abort Command Limit: 4 00:08:00.436 Async Event Request Limit: 4 00:08:00.436 Number of Firmware Slots: N/A 00:08:00.436 Firmware Slot 1 Read-Only: N/A 00:08:00.436 Firmware Activation Without Reset: N/A 00:08:00.436 Multiple Update Detection Support: N/A 00:08:00.436 Firmware Update Granularity: No Information Provided 00:08:00.436 Per-Namespace SMART Log: Yes 00:08:00.436 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.436 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:00.436 Command Effects Log Page: Supported 00:08:00.436 Get Log Page Extended Data: Supported 00:08:00.436 Telemetry Log Pages: Not Supported 00:08:00.436 Persistent Event Log Pages: Not Supported 00:08:00.436 Supported Log Pages Log Page: May Support 00:08:00.436 Commands Supported & Effects Log Page: Not Supported 00:08:00.436 Feature Identifiers & Effects Log Page:May Support 00:08:00.436 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.436 Data Area 4 for Telemetry Log: Not Supported 00:08:00.436 Error Log Page Entries Supported: 1 00:08:00.436 Keep Alive: Not Supported 00:08:00.436 00:08:00.436 NVM Command Set Attributes 00:08:00.436 ========================== 00:08:00.436 Submission Queue Entry Size 00:08:00.436 Max: 64 00:08:00.436 Min: 64 00:08:00.436 Completion Queue Entry Size 00:08:00.436 Max: 16 00:08:00.436 Min: 16 00:08:00.436 Number of Namespaces: 256 00:08:00.436 Compare Command: Supported 00:08:00.436 Write 
Uncorrectable Command: Not Supported 00:08:00.436 Dataset Management Command: Supported 00:08:00.436 Write Zeroes Command: Supported 00:08:00.436 Set Features Save Field: Supported 00:08:00.436 Reservations: Not Supported 00:08:00.436 Timestamp: Supported 00:08:00.436 Copy: Supported 00:08:00.436 Volatile Write Cache: Present 00:08:00.436 Atomic Write Unit (Normal): 1 00:08:00.436 Atomic Write Unit (PFail): 1 00:08:00.436 Atomic Compare & Write Unit: 1 00:08:00.436 Fused Compare & Write: Not Supported 00:08:00.436 Scatter-Gather List 00:08:00.436 SGL Command Set: Supported 00:08:00.436 SGL Keyed: Not Supported 00:08:00.436 SGL Bit Bucket Descriptor: Not Supported 00:08:00.436 SGL Metadata Pointer: Not Supported 00:08:00.436 Oversized SGL: Not Supported 00:08:00.436 SGL Metadata Address: Not Supported 00:08:00.436 SGL Offset: Not Supported 00:08:00.436 Transport SGL Data Block: Not Supported 00:08:00.436 Replay Protected Memory Block: Not Supported 00:08:00.436 00:08:00.436 Firmware Slot Information 00:08:00.436 ========================= 00:08:00.436 Active slot: 1 00:08:00.436 Slot 1 Firmware Revision: 1.0 00:08:00.436 00:08:00.436 00:08:00.436 Commands Supported and Effects 00:08:00.436 ============================== 00:08:00.436 Admin Commands 00:08:00.436 -------------- 00:08:00.436 Delete I/O Submission Queue (00h): Supported 00:08:00.436 Create I/O Submission Queue (01h): Supported 00:08:00.436 Get Log Page (02h): Supported 00:08:00.436 Delete I/O Completion Queue (04h): Supported 00:08:00.436 Create I/O Completion Queue (05h): Supported 00:08:00.436 Identify (06h): Supported 00:08:00.436 Abort (08h): Supported 00:08:00.436 Set Features (09h): Supported 00:08:00.436 Get Features (0Ah): Supported 00:08:00.436 Asynchronous Event Request (0Ch): Supported 00:08:00.436 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.436 Directive Send (19h): Supported 00:08:00.436 Directive Receive (1Ah): Supported 00:08:00.436 Virtualization Management (1Ch): Supported 00:08:00.436 Doorbell Buffer Config (7Ch): Supported 00:08:00.436 Format NVM (80h): Supported LBA-Change 00:08:00.436 I/O Commands 00:08:00.436 ------------ 00:08:00.436 Flush (00h): Supported LBA-Change 00:08:00.436 Write (01h): Supported LBA-Change 00:08:00.436 Read (02h): Supported 00:08:00.436 Compare (05h): Supported 00:08:00.436 Write Zeroes (08h): Supported LBA-Change 00:08:00.436 Dataset Management (09h): Supported LBA-Change 00:08:00.436 Unknown (0Ch): Supported 00:08:00.436 Unknown (12h): Supported 00:08:00.436 Copy (19h): Supported LBA-Change 00:08:00.436 Unknown (1Dh): Supported LBA-Change 00:08:00.436 00:08:00.436 Error Log 00:08:00.436 ========= 00:08:00.436 00:08:00.436 Arbitration 00:08:00.436 =========== 00:08:00.437 Arbitration Burst: no limit 00:08:00.437 00:08:00.437 Power Management 00:08:00.437 ================ 00:08:00.437 Number of Power States: 1 00:08:00.437 Current Power State: Power State #0 00:08:00.437 Power State #0: 00:08:00.437 Max Power: 25.00 W 00:08:00.437 Non-Operational State: Operational 00:08:00.437 Entry Latency: 16 microseconds 00:08:00.437 Exit Latency: 4 microseconds 00:08:00.437 Relative Read Throughput: 0 00:08:00.437 Relative Read Latency: 0 00:08:00.437 Relative Write Throughput: 0 00:08:00.437 Relative Write Latency: 0 00:08:00.437 Idle Power: Not Reported 00:08:00.437 Active Power: Not Reported 00:08:00.437 Non-Operational Permissive Mode: Not Supported 00:08:00.437 00:08:00.437 Health Information 00:08:00.437 ================== 00:08:00.437 Critical Warnings: 00:08:00.437 
Available Spare Space: OK 00:08:00.437 Temperature: OK 00:08:00.437 Device Reliability: OK 00:08:00.437 Read Only: No 00:08:00.437 Volatile Memory Backup: OK 00:08:00.437 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.437 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.437 Available Spare: 0% 00:08:00.437 Available Spare Threshold: 0% 00:08:00.437 Life Percentage Used: 0% 00:08:00.437 Data Units Read: 868 00:08:00.437 Data Units Written: 797 00:08:00.437 Host Read Commands: 39017 00:08:00.437 Host Write Commands: 38440 00:08:00.437 Controller Busy Time: 0 minutes 00:08:00.437 Power Cycles: 0 00:08:00.437 Power On Hours: 0 hours 00:08:00.437 Unsafe Shutdowns: 0 00:08:00.437 Unrecoverable Media Errors: 0 00:08:00.437 Lifetime Error Log Entries: 0 00:08:00.437 Warning Temperature Time: 0 minutes 00:08:00.437 Critical Temperature Time: 0 minutes 00:08:00.437 00:08:00.437 Number of Queues 00:08:00.437 ================ 00:08:00.437 Number of I/O Submission Queues: 64 00:08:00.437 Number of I/O Completion Queues: 64 00:08:00.437 00:08:00.437 ZNS Specific Controller Data 00:08:00.437 ============================ 00:08:00.437 Zone Append Size Limit: 0 00:08:00.437 00:08:00.437 00:08:00.437 Active Namespaces 00:08:00.437 ================= 00:08:00.437 Namespace ID:1 00:08:00.437 Error Recovery Timeout: Unlimited 00:08:00.437 Command Set Identifier: NVM (00h) 00:08:00.437 Deallocate: Supported 00:08:00.437 Deallocated/Unwritten Error: Supported 00:08:00.437 Deallocated Read Value: All 0x00 00:08:00.437 Deallocate in Write Zeroes: Not Supported 00:08:00.437 Deallocated Guard Field: 0xFFFF 00:08:00.437 Flush: Supported 00:08:00.437 Reservation: Not Supported 00:08:00.437 Namespace Sharing Capabilities: Multiple Controllers 00:08:00.437 Size (in LBAs): 262144 (1GiB) 00:08:00.437 Capacity (in LBAs): 262144 (1GiB) 00:08:00.437 Utilization (in LBAs): 262144 (1GiB) 00:08:00.437 Thin Provisioning: Not Supported 00:08:00.437 Per-NS Atomic Units: No 00:08:00.437 Maximum Single Source Range Length: 128 00:08:00.437 Maximum Copy Length: 128 00:08:00.437 Maximum Source Range Count: 128 00:08:00.437 NGUID/EUI64 Never Reused: No 00:08:00.437 Namespace Write Protected: No 00:08:00.437 Endurance group ID: 1 00:08:00.437 Number of LBA Formats: 8 00:08:00.437 Current LBA Format: LBA Format #04 00:08:00.437 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.437 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.437 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.437 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.437 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.437 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.437 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.437 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.437 00:08:00.437 Get Feature FDP: 00:08:00.437 ================ 00:08:00.437 Enabled: Yes 00:08:00.437 FDP configuration index: 0 00:08:00.437 00:08:00.437 FDP configurations log page 00:08:00.437 =========================== 00:08:00.437 Number of FDP configurations: 1 00:08:00.437 Version: 0 00:08:00.437 Size: 112 00:08:00.437 FDP Configuration Descriptor: 0 00:08:00.437 Descriptor Size: 96 00:08:00.437 Reclaim Group Identifier format: 2 00:08:00.437 FDP Volatile Write Cache: Not Present 00:08:00.437 FDP Configuration: Valid 00:08:00.437 Vendor Specific Size: 0 00:08:00.437 Number of Reclaim Groups: 2 00:08:00.437 Number of Reclaim Unit Handles: 8 00:08:00.437 Max Placement Identifiers: 128 00:08:00.437 Number of 
Namespaces Supported: 256 00:08:00.437 Reclaim unit Nominal Size: 6000000 bytes 00:08:00.437 Estimated Reclaim Unit Time Limit: Not Reported 00:08:00.437 RUH Desc #000: RUH Type: Initially Isolated 00:08:00.437 RUH Desc #001: RUH Type: Initially Isolated 00:08:00.437 RUH Desc #002: RUH Type: Initially Isolated 00:08:00.437 RUH Desc #003: RUH Type: Initially Isolated 00:08:00.437 RUH Desc #004: RUH Type: Initially Isolated 00:08:00.437 RUH Desc #005: RUH Type: Initially Isolated 00:08:00.437 RUH Desc #006: RUH Type: Initially Isolated 00:08:00.437 RUH Desc #007: RUH Type: Initially Isolated 00:08:00.437 00:08:00.437 FDP reclaim unit handle usage log page 00:08:00.437 ====================================== 00:08:00.437 Number of Reclaim Unit Handles: 8 00:08:00.437 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:00.437 RUH Usage Desc #001: RUH Attributes: Unused 00:08:00.437 RUH Usage Desc #002: RUH Attributes: Unused 00:08:00.437 RUH Usage Desc #003: RUH Attributes: Unused 00:08:00.437 RUH Usage Desc #004: RUH Attributes: Unused 00:08:00.437 RUH Usage Desc #005: RUH Attributes: Unused 00:08:00.437 RUH Usage Desc #006: RUH Attributes: Unused 00:08:00.437 RUH Usage Desc #007: RUH Attributes: Unused 00:08:00.437 00:08:00.437 FDP statistics log page 00:08:00.437 ======================= 00:08:00.437 Host bytes with metadata written: 493985792 00:08:00.437 Medi[2024-11-20 16:34:45.056535] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62829 terminated unexpected 00:08:00.437 a bytes with metadata written: 494039040 00:08:00.437 Media bytes erased: 0 00:08:00.437 00:08:00.437 FDP events log page 00:08:00.437 =================== 00:08:00.437 Number of FDP events: 0 00:08:00.437 00:08:00.437 NVM Specific Namespace Data 00:08:00.437 =========================== 00:08:00.437 Logical Block Storage Tag Mask: 0 00:08:00.437 Protection Information Capabilities: 00:08:00.437 16b Guard Protection Information Storage Tag Support: No 00:08:00.437 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.437 Storage Tag Check Read Support: No 00:08:00.437 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.437 ===================================================== 00:08:00.437 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.437 ===================================================== 00:08:00.437 Controller Capabilities/Features 00:08:00.437 ================================ 00:08:00.437 Vendor ID: 1b36 00:08:00.437 Subsystem Vendor ID: 1af4 00:08:00.437 Serial Number: 12342 00:08:00.437 Model Number: QEMU NVMe Ctrl 00:08:00.437 Firmware Version: 8.0.0 00:08:00.437 Recommended Arb Burst: 6 00:08:00.437 IEEE OUI Identifier: 00 54 52 00:08:00.437 Multi-path I/O 
00:08:00.437 May have multiple subsystem ports: No 00:08:00.437 May have multiple controllers: No 00:08:00.437 Associated with SR-IOV VF: No 00:08:00.437 Max Data Transfer Size: 524288 00:08:00.437 Max Number of Namespaces: 256 00:08:00.437 Max Number of I/O Queues: 64 00:08:00.437 NVMe Specification Version (VS): 1.4 00:08:00.437 NVMe Specification Version (Identify): 1.4 00:08:00.437 Maximum Queue Entries: 2048 00:08:00.437 Contiguous Queues Required: Yes 00:08:00.437 Arbitration Mechanisms Supported 00:08:00.437 Weighted Round Robin: Not Supported 00:08:00.437 Vendor Specific: Not Supported 00:08:00.437 Reset Timeout: 7500 ms 00:08:00.437 Doorbell Stride: 4 bytes 00:08:00.437 NVM Subsystem Reset: Not Supported 00:08:00.437 Command Sets Supported 00:08:00.437 NVM Command Set: Supported 00:08:00.438 Boot Partition: Not Supported 00:08:00.438 Memory Page Size Minimum: 4096 bytes 00:08:00.438 Memory Page Size Maximum: 65536 bytes 00:08:00.438 Persistent Memory Region: Not Supported 00:08:00.438 Optional Asynchronous Events Supported 00:08:00.438 Namespace Attribute Notices: Supported 00:08:00.438 Firmware Activation Notices: Not Supported 00:08:00.438 ANA Change Notices: Not Supported 00:08:00.438 PLE Aggregate Log Change Notices: Not Supported 00:08:00.438 LBA Status Info Alert Notices: Not Supported 00:08:00.438 EGE Aggregate Log Change Notices: Not Supported 00:08:00.438 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.438 Zone Descriptor Change Notices: Not Supported 00:08:00.438 Discovery Log Change Notices: Not Supported 00:08:00.438 Controller Attributes 00:08:00.438 128-bit Host Identifier: Not Supported 00:08:00.438 Non-Operational Permissive Mode: Not Supported 00:08:00.438 NVM Sets: Not Supported 00:08:00.438 Read Recovery Levels: Not Supported 00:08:00.438 Endurance Groups: Not Supported 00:08:00.438 Predictable Latency Mode: Not Supported 00:08:00.438 Traffic Based Keep ALive: Not Supported 00:08:00.438 Namespace Granularity: Not Supported 00:08:00.438 SQ Associations: Not Supported 00:08:00.438 UUID List: Not Supported 00:08:00.438 Multi-Domain Subsystem: Not Supported 00:08:00.438 Fixed Capacity Management: Not Supported 00:08:00.438 Variable Capacity Management: Not Supported 00:08:00.438 Delete Endurance Group: Not Supported 00:08:00.438 Delete NVM Set: Not Supported 00:08:00.438 Extended LBA Formats Supported: Supported 00:08:00.438 Flexible Data Placement Supported: Not Supported 00:08:00.438 00:08:00.438 Controller Memory Buffer Support 00:08:00.438 ================================ 00:08:00.438 Supported: No 00:08:00.438 00:08:00.438 Persistent Memory Region Support 00:08:00.438 ================================ 00:08:00.438 Supported: No 00:08:00.438 00:08:00.438 Admin Command Set Attributes 00:08:00.438 ============================ 00:08:00.438 Security Send/Receive: Not Supported 00:08:00.438 Format NVM: Supported 00:08:00.438 Firmware Activate/Download: Not Supported 00:08:00.438 Namespace Management: Supported 00:08:00.438 Device Self-Test: Not Supported 00:08:00.438 Directives: Supported 00:08:00.438 NVMe-MI: Not Supported 00:08:00.438 Virtualization Management: Not Supported 00:08:00.438 Doorbell Buffer Config: Supported 00:08:00.438 Get LBA Status Capability: Not Supported 00:08:00.438 Command & Feature Lockdown Capability: Not Supported 00:08:00.438 Abort Command Limit: 4 00:08:00.438 Async Event Request Limit: 4 00:08:00.438 Number of Firmware Slots: N/A 00:08:00.438 Firmware Slot 1 Read-Only: N/A 00:08:00.438 Firmware Activation Without Reset: N/A 
00:08:00.438 Multiple Update Detection Support: N/A 00:08:00.438 Firmware Update Granularity: No Information Provided 00:08:00.438 Per-Namespace SMART Log: Yes 00:08:00.438 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.438 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:00.438 Command Effects Log Page: Supported 00:08:00.438 Get Log Page Extended Data: Supported 00:08:00.438 Telemetry Log Pages: Not Supported 00:08:00.438 Persistent Event Log Pages: Not Supported 00:08:00.438 Supported Log Pages Log Page: May Support 00:08:00.438 Commands Supported & Effects Log Page: Not Supported 00:08:00.438 Feature Identifiers & Effects Log Page:May Support 00:08:00.438 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.438 Data Area 4 for Telemetry Log: Not Supported 00:08:00.438 Error Log Page Entries Supported: 1 00:08:00.438 Keep Alive: Not Supported 00:08:00.438 00:08:00.438 NVM Command Set Attributes 00:08:00.438 ========================== 00:08:00.438 Submission Queue Entry Size 00:08:00.438 Max: 64 00:08:00.438 Min: 64 00:08:00.438 Completion Queue Entry Size 00:08:00.438 Max: 16 00:08:00.438 Min: 16 00:08:00.438 Number of Namespaces: 256 00:08:00.438 Compare Command: Supported 00:08:00.438 Write Uncorrectable Command: Not Supported 00:08:00.438 Dataset Management Command: Supported 00:08:00.438 Write Zeroes Command: Supported 00:08:00.438 Set Features Save Field: Supported 00:08:00.438 Reservations: Not Supported 00:08:00.438 Timestamp: Supported 00:08:00.438 Copy: Supported 00:08:00.438 Volatile Write Cache: Present 00:08:00.438 Atomic Write Unit (Normal): 1 00:08:00.438 Atomic Write Unit (PFail): 1 00:08:00.438 Atomic Compare & Write Unit: 1 00:08:00.438 Fused Compare & Write: Not Supported 00:08:00.438 Scatter-Gather List 00:08:00.438 SGL Command Set: Supported 00:08:00.438 SGL Keyed: Not Supported 00:08:00.438 SGL Bit Bucket Descriptor: Not Supported 00:08:00.438 SGL Metadata Pointer: Not Supported 00:08:00.438 Oversized SGL: Not Supported 00:08:00.438 SGL Metadata Address: Not Supported 00:08:00.438 SGL Offset: Not Supported 00:08:00.438 Transport SGL Data Block: Not Supported 00:08:00.438 Replay Protected Memory Block: Not Supported 00:08:00.438 00:08:00.438 Firmware Slot Information 00:08:00.438 ========================= 00:08:00.438 Active slot: 1 00:08:00.438 Slot 1 Firmware Revision: 1.0 00:08:00.438 00:08:00.438 00:08:00.438 Commands Supported and Effects 00:08:00.438 ============================== 00:08:00.438 Admin Commands 00:08:00.438 -------------- 00:08:00.438 Delete I/O Submission Queue (00h): Supported 00:08:00.438 Create I/O Submission Queue (01h): Supported 00:08:00.438 Get Log Page (02h): Supported 00:08:00.438 Delete I/O Completion Queue (04h): Supported 00:08:00.438 Create I/O Completion Queue (05h): Supported 00:08:00.438 Identify (06h): Supported 00:08:00.438 Abort (08h): Supported 00:08:00.438 Set Features (09h): Supported 00:08:00.438 Get Features (0Ah): Supported 00:08:00.438 Asynchronous Event Request (0Ch): Supported 00:08:00.438 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.438 Directive Send (19h): Supported 00:08:00.438 Directive Receive (1Ah): Supported 00:08:00.438 Virtualization Management (1Ch): Supported 00:08:00.438 Doorbell Buffer Config (7Ch): Supported 00:08:00.438 Format NVM (80h): Supported LBA-Change 00:08:00.438 I/O Commands 00:08:00.438 ------------ 00:08:00.438 Flush (00h): Supported LBA-Change 00:08:00.438 Write (01h): Supported LBA-Change 00:08:00.438 Read (02h): Supported 00:08:00.438 Compare (05h): 
Supported 00:08:00.438 Write Zeroes (08h): Supported LBA-Change 00:08:00.438 Dataset Management (09h): Supported LBA-Change 00:08:00.438 Unknown (0Ch): Supported 00:08:00.438 Unknown (12h): Supported 00:08:00.438 Copy (19h): Supported LBA-Change 00:08:00.438 Unknown (1Dh): Supported LBA-Change 00:08:00.438 00:08:00.438 Error Log 00:08:00.438 ========= 00:08:00.438 00:08:00.438 Arbitration 00:08:00.438 =========== 00:08:00.438 Arbitration Burst: no limit 00:08:00.438 00:08:00.438 Power Management 00:08:00.438 ================ 00:08:00.438 Number of Power States: 1 00:08:00.438 Current Power State: Power State #0 00:08:00.438 Power State #0: 00:08:00.438 Max Power: 25.00 W 00:08:00.438 Non-Operational State: Operational 00:08:00.438 Entry Latency: 16 microseconds 00:08:00.438 Exit Latency: 4 microseconds 00:08:00.439 Relative Read Throughput: 0 00:08:00.439 Relative Read Latency: 0 00:08:00.439 Relative Write Throughput: 0 00:08:00.439 Relative Write Latency: 0 00:08:00.439 Idle Power: Not Reported 00:08:00.439 Active Power: Not Reported 00:08:00.439 Non-Operational Permissive Mode: Not Supported 00:08:00.439 00:08:00.439 Health Information 00:08:00.439 ================== 00:08:00.439 Critical Warnings: 00:08:00.439 Available Spare Space: OK 00:08:00.439 Temperature: OK 00:08:00.439 Device Reliability: OK 00:08:00.439 Read Only: No 00:08:00.439 Volatile Memory Backup: OK 00:08:00.439 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.439 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.439 Available Spare: 0% 00:08:00.439 Available Spare Threshold: 0% 00:08:00.439 Life Percentage Used: 0% 00:08:00.439 Data Units Read: 2191 00:08:00.439 Data Units Written: 1979 00:08:00.439 Host Read Commands: 113407 00:08:00.439 Host Write Commands: 111676 00:08:00.439 Controller Busy Time: 0 minutes 00:08:00.439 Power Cycles: 0 00:08:00.439 Power On Hours: 0 hours 00:08:00.439 Unsafe Shutdowns: 0 00:08:00.439 Unrecoverable Media Errors: 0 00:08:00.439 Lifetime Error Log Entries: 0 00:08:00.439 Warning Temperature Time: 0 minutes 00:08:00.439 Critical Temperature Time: 0 minutes 00:08:00.439 00:08:00.439 Number of Queues 00:08:00.439 ================ 00:08:00.439 Number of I/O Submission Queues: 64 00:08:00.439 Number of I/O Completion Queues: 64 00:08:00.439 00:08:00.439 ZNS Specific Controller Data 00:08:00.439 ============================ 00:08:00.439 Zone Append Size Limit: 0 00:08:00.439 00:08:00.439 00:08:00.439 Active Namespaces 00:08:00.439 ================= 00:08:00.439 Namespace ID:1 00:08:00.439 Error Recovery Timeout: Unlimited 00:08:00.439 Command Set Identifier: NVM (00h) 00:08:00.439 Deallocate: Supported 00:08:00.439 Deallocated/Unwritten Error: Supported 00:08:00.439 Deallocated Read Value: All 0x00 00:08:00.439 Deallocate in Write Zeroes: Not Supported 00:08:00.439 Deallocated Guard Field: 0xFFFF 00:08:00.439 Flush: Supported 00:08:00.439 Reservation: Not Supported 00:08:00.439 Namespace Sharing Capabilities: Private 00:08:00.439 Size (in LBAs): 1048576 (4GiB) 00:08:00.439 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.439 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.439 Thin Provisioning: Not Supported 00:08:00.439 Per-NS Atomic Units: No 00:08:00.439 Maximum Single Source Range Length: 128 00:08:00.439 Maximum Copy Length: 128 00:08:00.439 Maximum Source Range Count: 128 00:08:00.439 NGUID/EUI64 Never Reused: No 00:08:00.439 Namespace Write Protected: No 00:08:00.439 Number of LBA Formats: 8 00:08:00.439 Current LBA Format: LBA Format #04 00:08:00.439 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:00.439 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.439 00:08:00.439 NVM Specific Namespace Data 00:08:00.439 =========================== 00:08:00.439 Logical Block Storage Tag Mask: 0 00:08:00.439 Protection Information Capabilities: 00:08:00.439 16b Guard Protection Information Storage Tag Support: No 00:08:00.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.439 Storage Tag Check Read Support: No 00:08:00.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Namespace ID:2 00:08:00.439 Error Recovery Timeout: Unlimited 00:08:00.439 Command Set Identifier: NVM (00h) 00:08:00.439 Deallocate: Supported 00:08:00.439 Deallocated/Unwritten Error: Supported 00:08:00.439 Deallocated Read Value: All 0x00 00:08:00.439 Deallocate in Write Zeroes: Not Supported 00:08:00.439 Deallocated Guard Field: 0xFFFF 00:08:00.439 Flush: Supported 00:08:00.439 Reservation: Not Supported 00:08:00.439 Namespace Sharing Capabilities: Private 00:08:00.439 Size (in LBAs): 1048576 (4GiB) 00:08:00.439 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.439 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.439 Thin Provisioning: Not Supported 00:08:00.439 Per-NS Atomic Units: No 00:08:00.439 Maximum Single Source Range Length: 128 00:08:00.439 Maximum Copy Length: 128 00:08:00.439 Maximum Source Range Count: 128 00:08:00.439 NGUID/EUI64 Never Reused: No 00:08:00.439 Namespace Write Protected: No 00:08:00.439 Number of LBA Formats: 8 00:08:00.439 Current LBA Format: LBA Format #04 00:08:00.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.439 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.439 00:08:00.439 NVM Specific Namespace Data 00:08:00.439 =========================== 00:08:00.439 Logical Block Storage Tag Mask: 0 00:08:00.439 Protection Information Capabilities: 00:08:00.439 16b Guard Protection Information Storage Tag Support: No 00:08:00.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:00.439 Storage Tag Check Read Support: No 00:08:00.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Namespace ID:3 00:08:00.439 Error Recovery Timeout: Unlimited 00:08:00.439 Command Set Identifier: NVM (00h) 00:08:00.439 Deallocate: Supported 00:08:00.439 Deallocated/Unwritten Error: Supported 00:08:00.439 Deallocated Read Value: All 0x00 00:08:00.439 Deallocate in Write Zeroes: Not Supported 00:08:00.439 Deallocated Guard Field: 0xFFFF 00:08:00.439 Flush: Supported 00:08:00.439 Reservation: Not Supported 00:08:00.439 Namespace Sharing Capabilities: Private 00:08:00.439 Size (in LBAs): 1048576 (4GiB) 00:08:00.439 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.439 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.439 Thin Provisioning: Not Supported 00:08:00.439 Per-NS Atomic Units: No 00:08:00.439 Maximum Single Source Range Length: 128 00:08:00.439 Maximum Copy Length: 128 00:08:00.439 Maximum Source Range Count: 128 00:08:00.439 NGUID/EUI64 Never Reused: No 00:08:00.439 Namespace Write Protected: No 00:08:00.439 Number of LBA Formats: 8 00:08:00.439 Current LBA Format: LBA Format #04 00:08:00.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.439 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.439 00:08:00.439 NVM Specific Namespace Data 00:08:00.439 =========================== 00:08:00.439 Logical Block Storage Tag Mask: 0 00:08:00.439 Protection Information Capabilities: 00:08:00.439 16b Guard Protection Information Storage Tag Support: No 00:08:00.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.439 Storage Tag Check Read Support: No 00:08:00.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.440 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.440 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.440 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.440 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.440 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.440 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:00.702 ===================================================== 00:08:00.702 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.702 ===================================================== 00:08:00.702 Controller Capabilities/Features 00:08:00.702 ================================ 00:08:00.702 Vendor ID: 1b36 00:08:00.702 Subsystem Vendor ID: 1af4 00:08:00.702 Serial Number: 12340 00:08:00.702 Model Number: QEMU NVMe Ctrl 00:08:00.702 Firmware Version: 8.0.0 00:08:00.702 Recommended Arb Burst: 6 00:08:00.702 IEEE OUI Identifier: 00 54 52 00:08:00.702 Multi-path I/O 00:08:00.702 May have multiple subsystem ports: No 00:08:00.702 May have multiple controllers: No 00:08:00.702 Associated with SR-IOV VF: No 00:08:00.702 Max Data Transfer Size: 524288 00:08:00.702 Max Number of Namespaces: 256 00:08:00.702 Max Number of I/O Queues: 64 00:08:00.702 NVMe Specification Version (VS): 1.4 00:08:00.702 NVMe Specification Version (Identify): 1.4 00:08:00.702 Maximum Queue Entries: 2048 00:08:00.702 Contiguous Queues Required: Yes 00:08:00.702 Arbitration Mechanisms Supported 00:08:00.702 Weighted Round Robin: Not Supported 00:08:00.702 Vendor Specific: Not Supported 00:08:00.702 Reset Timeout: 7500 ms 00:08:00.702 Doorbell Stride: 4 bytes 00:08:00.702 NVM Subsystem Reset: Not Supported 00:08:00.702 Command Sets Supported 00:08:00.702 NVM Command Set: Supported 00:08:00.702 Boot Partition: Not Supported 00:08:00.702 Memory Page Size Minimum: 4096 bytes 00:08:00.702 Memory Page Size Maximum: 65536 bytes 00:08:00.702 Persistent Memory Region: Not Supported 00:08:00.702 Optional Asynchronous Events Supported 00:08:00.702 Namespace Attribute Notices: Supported 00:08:00.702 Firmware Activation Notices: Not Supported 00:08:00.702 ANA Change Notices: Not Supported 00:08:00.702 PLE Aggregate Log Change Notices: Not Supported 00:08:00.702 LBA Status Info Alert Notices: Not Supported 00:08:00.702 EGE Aggregate Log Change Notices: Not Supported 00:08:00.702 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.702 Zone Descriptor Change Notices: Not Supported 00:08:00.702 Discovery Log Change Notices: Not Supported 00:08:00.702 Controller Attributes 00:08:00.702 128-bit Host Identifier: Not Supported 00:08:00.702 Non-Operational Permissive Mode: Not Supported 00:08:00.702 NVM Sets: Not Supported 00:08:00.702 Read Recovery Levels: Not Supported 00:08:00.702 Endurance Groups: Not Supported 00:08:00.702 Predictable Latency Mode: Not Supported 00:08:00.702 Traffic Based Keep ALive: Not Supported 00:08:00.702 Namespace Granularity: Not Supported 00:08:00.702 SQ Associations: Not Supported 00:08:00.702 UUID List: Not Supported 00:08:00.702 Multi-Domain Subsystem: Not Supported 00:08:00.702 Fixed Capacity Management: Not Supported 00:08:00.702 Variable Capacity Management: Not Supported 00:08:00.702 Delete Endurance Group: Not Supported 00:08:00.702 Delete NVM Set: Not Supported 00:08:00.702 Extended LBA Formats Supported: Supported 00:08:00.702 Flexible Data Placement Supported: Not Supported 00:08:00.702 00:08:00.702 Controller Memory Buffer Support 00:08:00.702 ================================ 00:08:00.702 Supported: No 00:08:00.702 00:08:00.702 Persistent Memory Region Support 00:08:00.702 
================================ 00:08:00.702 Supported: No 00:08:00.702 00:08:00.702 Admin Command Set Attributes 00:08:00.702 ============================ 00:08:00.702 Security Send/Receive: Not Supported 00:08:00.702 Format NVM: Supported 00:08:00.702 Firmware Activate/Download: Not Supported 00:08:00.702 Namespace Management: Supported 00:08:00.702 Device Self-Test: Not Supported 00:08:00.702 Directives: Supported 00:08:00.702 NVMe-MI: Not Supported 00:08:00.702 Virtualization Management: Not Supported 00:08:00.702 Doorbell Buffer Config: Supported 00:08:00.702 Get LBA Status Capability: Not Supported 00:08:00.702 Command & Feature Lockdown Capability: Not Supported 00:08:00.702 Abort Command Limit: 4 00:08:00.702 Async Event Request Limit: 4 00:08:00.702 Number of Firmware Slots: N/A 00:08:00.702 Firmware Slot 1 Read-Only: N/A 00:08:00.702 Firmware Activation Without Reset: N/A 00:08:00.702 Multiple Update Detection Support: N/A 00:08:00.702 Firmware Update Granularity: No Information Provided 00:08:00.702 Per-Namespace SMART Log: Yes 00:08:00.702 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.702 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:00.702 Command Effects Log Page: Supported 00:08:00.702 Get Log Page Extended Data: Supported 00:08:00.702 Telemetry Log Pages: Not Supported 00:08:00.702 Persistent Event Log Pages: Not Supported 00:08:00.702 Supported Log Pages Log Page: May Support 00:08:00.702 Commands Supported & Effects Log Page: Not Supported 00:08:00.702 Feature Identifiers & Effects Log Page:May Support 00:08:00.702 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.702 Data Area 4 for Telemetry Log: Not Supported 00:08:00.702 Error Log Page Entries Supported: 1 00:08:00.702 Keep Alive: Not Supported 00:08:00.702 00:08:00.702 NVM Command Set Attributes 00:08:00.702 ========================== 00:08:00.702 Submission Queue Entry Size 00:08:00.702 Max: 64 00:08:00.702 Min: 64 00:08:00.702 Completion Queue Entry Size 00:08:00.702 Max: 16 00:08:00.702 Min: 16 00:08:00.702 Number of Namespaces: 256 00:08:00.702 Compare Command: Supported 00:08:00.702 Write Uncorrectable Command: Not Supported 00:08:00.702 Dataset Management Command: Supported 00:08:00.702 Write Zeroes Command: Supported 00:08:00.702 Set Features Save Field: Supported 00:08:00.702 Reservations: Not Supported 00:08:00.702 Timestamp: Supported 00:08:00.702 Copy: Supported 00:08:00.702 Volatile Write Cache: Present 00:08:00.702 Atomic Write Unit (Normal): 1 00:08:00.702 Atomic Write Unit (PFail): 1 00:08:00.702 Atomic Compare & Write Unit: 1 00:08:00.702 Fused Compare & Write: Not Supported 00:08:00.702 Scatter-Gather List 00:08:00.702 SGL Command Set: Supported 00:08:00.702 SGL Keyed: Not Supported 00:08:00.702 SGL Bit Bucket Descriptor: Not Supported 00:08:00.702 SGL Metadata Pointer: Not Supported 00:08:00.702 Oversized SGL: Not Supported 00:08:00.702 SGL Metadata Address: Not Supported 00:08:00.702 SGL Offset: Not Supported 00:08:00.702 Transport SGL Data Block: Not Supported 00:08:00.702 Replay Protected Memory Block: Not Supported 00:08:00.702 00:08:00.702 Firmware Slot Information 00:08:00.702 ========================= 00:08:00.702 Active slot: 1 00:08:00.702 Slot 1 Firmware Revision: 1.0 00:08:00.702 00:08:00.702 00:08:00.702 Commands Supported and Effects 00:08:00.702 ============================== 00:08:00.702 Admin Commands 00:08:00.702 -------------- 00:08:00.702 Delete I/O Submission Queue (00h): Supported 00:08:00.702 Create I/O Submission Queue (01h): Supported 00:08:00.702 
Get Log Page (02h): Supported 00:08:00.702 Delete I/O Completion Queue (04h): Supported 00:08:00.702 Create I/O Completion Queue (05h): Supported 00:08:00.702 Identify (06h): Supported 00:08:00.702 Abort (08h): Supported 00:08:00.702 Set Features (09h): Supported 00:08:00.702 Get Features (0Ah): Supported 00:08:00.702 Asynchronous Event Request (0Ch): Supported 00:08:00.702 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.702 Directive Send (19h): Supported 00:08:00.703 Directive Receive (1Ah): Supported 00:08:00.703 Virtualization Management (1Ch): Supported 00:08:00.703 Doorbell Buffer Config (7Ch): Supported 00:08:00.703 Format NVM (80h): Supported LBA-Change 00:08:00.703 I/O Commands 00:08:00.703 ------------ 00:08:00.703 Flush (00h): Supported LBA-Change 00:08:00.703 Write (01h): Supported LBA-Change 00:08:00.703 Read (02h): Supported 00:08:00.703 Compare (05h): Supported 00:08:00.703 Write Zeroes (08h): Supported LBA-Change 00:08:00.703 Dataset Management (09h): Supported LBA-Change 00:08:00.703 Unknown (0Ch): Supported 00:08:00.703 Unknown (12h): Supported 00:08:00.703 Copy (19h): Supported LBA-Change 00:08:00.703 Unknown (1Dh): Supported LBA-Change 00:08:00.703 00:08:00.703 Error Log 00:08:00.703 ========= 00:08:00.703 00:08:00.703 Arbitration 00:08:00.703 =========== 00:08:00.703 Arbitration Burst: no limit 00:08:00.703 00:08:00.703 Power Management 00:08:00.703 ================ 00:08:00.703 Number of Power States: 1 00:08:00.703 Current Power State: Power State #0 00:08:00.703 Power State #0: 00:08:00.703 Max Power: 25.00 W 00:08:00.703 Non-Operational State: Operational 00:08:00.703 Entry Latency: 16 microseconds 00:08:00.703 Exit Latency: 4 microseconds 00:08:00.703 Relative Read Throughput: 0 00:08:00.703 Relative Read Latency: 0 00:08:00.703 Relative Write Throughput: 0 00:08:00.703 Relative Write Latency: 0 00:08:00.703 Idle Power: Not Reported 00:08:00.703 Active Power: Not Reported 00:08:00.703 Non-Operational Permissive Mode: Not Supported 00:08:00.703 00:08:00.703 Health Information 00:08:00.703 ================== 00:08:00.703 Critical Warnings: 00:08:00.703 Available Spare Space: OK 00:08:00.703 Temperature: OK 00:08:00.703 Device Reliability: OK 00:08:00.703 Read Only: No 00:08:00.703 Volatile Memory Backup: OK 00:08:00.703 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.703 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.703 Available Spare: 0% 00:08:00.703 Available Spare Threshold: 0% 00:08:00.703 Life Percentage Used: 0% 00:08:00.703 Data Units Read: 648 00:08:00.703 Data Units Written: 576 00:08:00.703 Host Read Commands: 36953 00:08:00.703 Host Write Commands: 36739 00:08:00.703 Controller Busy Time: 0 minutes 00:08:00.703 Power Cycles: 0 00:08:00.703 Power On Hours: 0 hours 00:08:00.703 Unsafe Shutdowns: 0 00:08:00.703 Unrecoverable Media Errors: 0 00:08:00.703 Lifetime Error Log Entries: 0 00:08:00.703 Warning Temperature Time: 0 minutes 00:08:00.703 Critical Temperature Time: 0 minutes 00:08:00.703 00:08:00.703 Number of Queues 00:08:00.703 ================ 00:08:00.703 Number of I/O Submission Queues: 64 00:08:00.703 Number of I/O Completion Queues: 64 00:08:00.703 00:08:00.703 ZNS Specific Controller Data 00:08:00.703 ============================ 00:08:00.703 Zone Append Size Limit: 0 00:08:00.703 00:08:00.703 00:08:00.703 Active Namespaces 00:08:00.703 ================= 00:08:00.703 Namespace ID:1 00:08:00.703 Error Recovery Timeout: Unlimited 00:08:00.703 Command Set Identifier: NVM (00h) 00:08:00.703 Deallocate: Supported 
00:08:00.703 Deallocated/Unwritten Error: Supported 00:08:00.703 Deallocated Read Value: All 0x00 00:08:00.703 Deallocate in Write Zeroes: Not Supported 00:08:00.703 Deallocated Guard Field: 0xFFFF 00:08:00.703 Flush: Supported 00:08:00.703 Reservation: Not Supported 00:08:00.703 Metadata Transferred as: Separate Metadata Buffer 00:08:00.703 Namespace Sharing Capabilities: Private 00:08:00.703 Size (in LBAs): 1548666 (5GiB) 00:08:00.703 Capacity (in LBAs): 1548666 (5GiB) 00:08:00.703 Utilization (in LBAs): 1548666 (5GiB) 00:08:00.703 Thin Provisioning: Not Supported 00:08:00.703 Per-NS Atomic Units: No 00:08:00.703 Maximum Single Source Range Length: 128 00:08:00.703 Maximum Copy Length: 128 00:08:00.703 Maximum Source Range Count: 128 00:08:00.703 NGUID/EUI64 Never Reused: No 00:08:00.703 Namespace Write Protected: No 00:08:00.703 Number of LBA Formats: 8 00:08:00.703 Current LBA Format: LBA Format #07 00:08:00.703 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.703 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.703 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.703 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.703 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.703 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.703 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.703 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.703 00:08:00.703 NVM Specific Namespace Data 00:08:00.703 =========================== 00:08:00.703 Logical Block Storage Tag Mask: 0 00:08:00.703 Protection Information Capabilities: 00:08:00.703 16b Guard Protection Information Storage Tag Support: No 00:08:00.703 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.703 Storage Tag Check Read Support: No 00:08:00.703 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.703 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.703 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:00.967 ===================================================== 00:08:00.967 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.967 ===================================================== 00:08:00.967 Controller Capabilities/Features 00:08:00.967 ================================ 00:08:00.967 Vendor ID: 1b36 00:08:00.967 Subsystem Vendor ID: 1af4 00:08:00.967 Serial Number: 12341 00:08:00.967 Model Number: QEMU NVMe Ctrl 00:08:00.967 Firmware Version: 8.0.0 00:08:00.967 Recommended Arb Burst: 6 00:08:00.967 IEEE OUI Identifier: 00 54 52 00:08:00.967 Multi-path I/O 00:08:00.967 May have multiple subsystem ports: No 00:08:00.967 May have multiple 
controllers: No 00:08:00.967 Associated with SR-IOV VF: No 00:08:00.967 Max Data Transfer Size: 524288 00:08:00.967 Max Number of Namespaces: 256 00:08:00.967 Max Number of I/O Queues: 64 00:08:00.967 NVMe Specification Version (VS): 1.4 00:08:00.967 NVMe Specification Version (Identify): 1.4 00:08:00.967 Maximum Queue Entries: 2048 00:08:00.967 Contiguous Queues Required: Yes 00:08:00.967 Arbitration Mechanisms Supported 00:08:00.967 Weighted Round Robin: Not Supported 00:08:00.967 Vendor Specific: Not Supported 00:08:00.967 Reset Timeout: 7500 ms 00:08:00.967 Doorbell Stride: 4 bytes 00:08:00.967 NVM Subsystem Reset: Not Supported 00:08:00.967 Command Sets Supported 00:08:00.967 NVM Command Set: Supported 00:08:00.967 Boot Partition: Not Supported 00:08:00.967 Memory Page Size Minimum: 4096 bytes 00:08:00.967 Memory Page Size Maximum: 65536 bytes 00:08:00.967 Persistent Memory Region: Not Supported 00:08:00.967 Optional Asynchronous Events Supported 00:08:00.967 Namespace Attribute Notices: Supported 00:08:00.967 Firmware Activation Notices: Not Supported 00:08:00.967 ANA Change Notices: Not Supported 00:08:00.967 PLE Aggregate Log Change Notices: Not Supported 00:08:00.967 LBA Status Info Alert Notices: Not Supported 00:08:00.967 EGE Aggregate Log Change Notices: Not Supported 00:08:00.967 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.967 Zone Descriptor Change Notices: Not Supported 00:08:00.967 Discovery Log Change Notices: Not Supported 00:08:00.967 Controller Attributes 00:08:00.967 128-bit Host Identifier: Not Supported 00:08:00.967 Non-Operational Permissive Mode: Not Supported 00:08:00.967 NVM Sets: Not Supported 00:08:00.967 Read Recovery Levels: Not Supported 00:08:00.967 Endurance Groups: Not Supported 00:08:00.967 Predictable Latency Mode: Not Supported 00:08:00.967 Traffic Based Keep ALive: Not Supported 00:08:00.967 Namespace Granularity: Not Supported 00:08:00.967 SQ Associations: Not Supported 00:08:00.967 UUID List: Not Supported 00:08:00.967 Multi-Domain Subsystem: Not Supported 00:08:00.967 Fixed Capacity Management: Not Supported 00:08:00.967 Variable Capacity Management: Not Supported 00:08:00.967 Delete Endurance Group: Not Supported 00:08:00.967 Delete NVM Set: Not Supported 00:08:00.967 Extended LBA Formats Supported: Supported 00:08:00.967 Flexible Data Placement Supported: Not Supported 00:08:00.967 00:08:00.967 Controller Memory Buffer Support 00:08:00.967 ================================ 00:08:00.967 Supported: No 00:08:00.967 00:08:00.967 Persistent Memory Region Support 00:08:00.967 ================================ 00:08:00.967 Supported: No 00:08:00.967 00:08:00.967 Admin Command Set Attributes 00:08:00.967 ============================ 00:08:00.967 Security Send/Receive: Not Supported 00:08:00.967 Format NVM: Supported 00:08:00.967 Firmware Activate/Download: Not Supported 00:08:00.967 Namespace Management: Supported 00:08:00.967 Device Self-Test: Not Supported 00:08:00.967 Directives: Supported 00:08:00.967 NVMe-MI: Not Supported 00:08:00.967 Virtualization Management: Not Supported 00:08:00.967 Doorbell Buffer Config: Supported 00:08:00.967 Get LBA Status Capability: Not Supported 00:08:00.967 Command & Feature Lockdown Capability: Not Supported 00:08:00.967 Abort Command Limit: 4 00:08:00.967 Async Event Request Limit: 4 00:08:00.967 Number of Firmware Slots: N/A 00:08:00.967 Firmware Slot 1 Read-Only: N/A 00:08:00.967 Firmware Activation Without Reset: N/A 00:08:00.967 Multiple Update Detection Support: N/A 00:08:00.967 Firmware Update 
Granularity: No Information Provided 00:08:00.967 Per-Namespace SMART Log: Yes 00:08:00.967 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.967 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:00.967 Command Effects Log Page: Supported 00:08:00.967 Get Log Page Extended Data: Supported 00:08:00.967 Telemetry Log Pages: Not Supported 00:08:00.967 Persistent Event Log Pages: Not Supported 00:08:00.967 Supported Log Pages Log Page: May Support 00:08:00.967 Commands Supported & Effects Log Page: Not Supported 00:08:00.967 Feature Identifiers & Effects Log Page:May Support 00:08:00.967 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.968 Data Area 4 for Telemetry Log: Not Supported 00:08:00.968 Error Log Page Entries Supported: 1 00:08:00.968 Keep Alive: Not Supported 00:08:00.968 00:08:00.968 NVM Command Set Attributes 00:08:00.968 ========================== 00:08:00.968 Submission Queue Entry Size 00:08:00.968 Max: 64 00:08:00.968 Min: 64 00:08:00.968 Completion Queue Entry Size 00:08:00.968 Max: 16 00:08:00.968 Min: 16 00:08:00.968 Number of Namespaces: 256 00:08:00.968 Compare Command: Supported 00:08:00.968 Write Uncorrectable Command: Not Supported 00:08:00.968 Dataset Management Command: Supported 00:08:00.968 Write Zeroes Command: Supported 00:08:00.968 Set Features Save Field: Supported 00:08:00.968 Reservations: Not Supported 00:08:00.968 Timestamp: Supported 00:08:00.968 Copy: Supported 00:08:00.968 Volatile Write Cache: Present 00:08:00.968 Atomic Write Unit (Normal): 1 00:08:00.968 Atomic Write Unit (PFail): 1 00:08:00.968 Atomic Compare & Write Unit: 1 00:08:00.968 Fused Compare & Write: Not Supported 00:08:00.968 Scatter-Gather List 00:08:00.968 SGL Command Set: Supported 00:08:00.968 SGL Keyed: Not Supported 00:08:00.968 SGL Bit Bucket Descriptor: Not Supported 00:08:00.968 SGL Metadata Pointer: Not Supported 00:08:00.968 Oversized SGL: Not Supported 00:08:00.968 SGL Metadata Address: Not Supported 00:08:00.968 SGL Offset: Not Supported 00:08:00.968 Transport SGL Data Block: Not Supported 00:08:00.968 Replay Protected Memory Block: Not Supported 00:08:00.968 00:08:00.968 Firmware Slot Information 00:08:00.968 ========================= 00:08:00.968 Active slot: 1 00:08:00.968 Slot 1 Firmware Revision: 1.0 00:08:00.968 00:08:00.968 00:08:00.968 Commands Supported and Effects 00:08:00.968 ============================== 00:08:00.968 Admin Commands 00:08:00.968 -------------- 00:08:00.968 Delete I/O Submission Queue (00h): Supported 00:08:00.968 Create I/O Submission Queue (01h): Supported 00:08:00.968 Get Log Page (02h): Supported 00:08:00.968 Delete I/O Completion Queue (04h): Supported 00:08:00.968 Create I/O Completion Queue (05h): Supported 00:08:00.968 Identify (06h): Supported 00:08:00.968 Abort (08h): Supported 00:08:00.968 Set Features (09h): Supported 00:08:00.968 Get Features (0Ah): Supported 00:08:00.968 Asynchronous Event Request (0Ch): Supported 00:08:00.968 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.968 Directive Send (19h): Supported 00:08:00.968 Directive Receive (1Ah): Supported 00:08:00.968 Virtualization Management (1Ch): Supported 00:08:00.968 Doorbell Buffer Config (7Ch): Supported 00:08:00.968 Format NVM (80h): Supported LBA-Change 00:08:00.968 I/O Commands 00:08:00.968 ------------ 00:08:00.968 Flush (00h): Supported LBA-Change 00:08:00.968 Write (01h): Supported LBA-Change 00:08:00.968 Read (02h): Supported 00:08:00.968 Compare (05h): Supported 00:08:00.968 Write Zeroes (08h): Supported LBA-Change 00:08:00.968 
Dataset Management (09h): Supported LBA-Change 00:08:00.968 Unknown (0Ch): Supported 00:08:00.968 Unknown (12h): Supported 00:08:00.968 Copy (19h): Supported LBA-Change 00:08:00.968 Unknown (1Dh): Supported LBA-Change 00:08:00.968 00:08:00.968 Error Log 00:08:00.968 ========= 00:08:00.968 00:08:00.968 Arbitration 00:08:00.968 =========== 00:08:00.968 Arbitration Burst: no limit 00:08:00.968 00:08:00.968 Power Management 00:08:00.968 ================ 00:08:00.968 Number of Power States: 1 00:08:00.968 Current Power State: Power State #0 00:08:00.968 Power State #0: 00:08:00.968 Max Power: 25.00 W 00:08:00.968 Non-Operational State: Operational 00:08:00.968 Entry Latency: 16 microseconds 00:08:00.968 Exit Latency: 4 microseconds 00:08:00.968 Relative Read Throughput: 0 00:08:00.968 Relative Read Latency: 0 00:08:00.968 Relative Write Throughput: 0 00:08:00.968 Relative Write Latency: 0 00:08:00.968 Idle Power: Not Reported 00:08:00.968 Active Power: Not Reported 00:08:00.968 Non-Operational Permissive Mode: Not Supported 00:08:00.968 00:08:00.968 Health Information 00:08:00.968 ================== 00:08:00.968 Critical Warnings: 00:08:00.968 Available Spare Space: OK 00:08:00.968 Temperature: OK 00:08:00.968 Device Reliability: OK 00:08:00.968 Read Only: No 00:08:00.968 Volatile Memory Backup: OK 00:08:00.968 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.968 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.968 Available Spare: 0% 00:08:00.968 Available Spare Threshold: 0% 00:08:00.968 Life Percentage Used: 0% 00:08:00.968 Data Units Read: 1037 00:08:00.968 Data Units Written: 897 00:08:00.968 Host Read Commands: 54610 00:08:00.968 Host Write Commands: 53299 00:08:00.968 Controller Busy Time: 0 minutes 00:08:00.968 Power Cycles: 0 00:08:00.968 Power On Hours: 0 hours 00:08:00.968 Unsafe Shutdowns: 0 00:08:00.968 Unrecoverable Media Errors: 0 00:08:00.968 Lifetime Error Log Entries: 0 00:08:00.968 Warning Temperature Time: 0 minutes 00:08:00.968 Critical Temperature Time: 0 minutes 00:08:00.968 00:08:00.968 Number of Queues 00:08:00.968 ================ 00:08:00.968 Number of I/O Submission Queues: 64 00:08:00.968 Number of I/O Completion Queues: 64 00:08:00.968 00:08:00.968 ZNS Specific Controller Data 00:08:00.968 ============================ 00:08:00.968 Zone Append Size Limit: 0 00:08:00.968 00:08:00.968 00:08:00.968 Active Namespaces 00:08:00.968 ================= 00:08:00.968 Namespace ID:1 00:08:00.968 Error Recovery Timeout: Unlimited 00:08:00.968 Command Set Identifier: NVM (00h) 00:08:00.968 Deallocate: Supported 00:08:00.968 Deallocated/Unwritten Error: Supported 00:08:00.968 Deallocated Read Value: All 0x00 00:08:00.968 Deallocate in Write Zeroes: Not Supported 00:08:00.968 Deallocated Guard Field: 0xFFFF 00:08:00.968 Flush: Supported 00:08:00.968 Reservation: Not Supported 00:08:00.969 Namespace Sharing Capabilities: Private 00:08:00.969 Size (in LBAs): 1310720 (5GiB) 00:08:00.969 Capacity (in LBAs): 1310720 (5GiB) 00:08:00.969 Utilization (in LBAs): 1310720 (5GiB) 00:08:00.969 Thin Provisioning: Not Supported 00:08:00.969 Per-NS Atomic Units: No 00:08:00.969 Maximum Single Source Range Length: 128 00:08:00.969 Maximum Copy Length: 128 00:08:00.969 Maximum Source Range Count: 128 00:08:00.969 NGUID/EUI64 Never Reused: No 00:08:00.969 Namespace Write Protected: No 00:08:00.969 Number of LBA Formats: 8 00:08:00.969 Current LBA Format: LBA Format #04 00:08:00.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.969 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:00.969 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.969 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.969 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.969 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.969 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.969 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.969 00:08:00.969 NVM Specific Namespace Data 00:08:00.969 =========================== 00:08:00.969 Logical Block Storage Tag Mask: 0 00:08:00.969 Protection Information Capabilities: 00:08:00.969 16b Guard Protection Information Storage Tag Support: No 00:08:00.969 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.969 Storage Tag Check Read Support: No 00:08:00.969 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.969 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.969 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:00.969 ===================================================== 00:08:00.969 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.969 ===================================================== 00:08:00.969 Controller Capabilities/Features 00:08:00.969 ================================ 00:08:00.969 Vendor ID: 1b36 00:08:00.969 Subsystem Vendor ID: 1af4 00:08:00.969 Serial Number: 12342 00:08:00.969 Model Number: QEMU NVMe Ctrl 00:08:00.969 Firmware Version: 8.0.0 00:08:00.969 Recommended Arb Burst: 6 00:08:00.969 IEEE OUI Identifier: 00 54 52 00:08:00.969 Multi-path I/O 00:08:00.969 May have multiple subsystem ports: No 00:08:00.969 May have multiple controllers: No 00:08:00.969 Associated with SR-IOV VF: No 00:08:00.969 Max Data Transfer Size: 524288 00:08:00.969 Max Number of Namespaces: 256 00:08:00.969 Max Number of I/O Queues: 64 00:08:00.969 NVMe Specification Version (VS): 1.4 00:08:00.969 NVMe Specification Version (Identify): 1.4 00:08:00.969 Maximum Queue Entries: 2048 00:08:00.969 Contiguous Queues Required: Yes 00:08:00.969 Arbitration Mechanisms Supported 00:08:00.969 Weighted Round Robin: Not Supported 00:08:00.969 Vendor Specific: Not Supported 00:08:00.969 Reset Timeout: 7500 ms 00:08:00.969 Doorbell Stride: 4 bytes 00:08:00.969 NVM Subsystem Reset: Not Supported 00:08:00.969 Command Sets Supported 00:08:00.969 NVM Command Set: Supported 00:08:00.969 Boot Partition: Not Supported 00:08:00.969 Memory Page Size Minimum: 4096 bytes 00:08:00.969 Memory Page Size Maximum: 65536 bytes 00:08:00.969 Persistent Memory Region: Not Supported 00:08:00.969 Optional Asynchronous Events Supported 00:08:00.969 Namespace Attribute Notices: Supported 00:08:00.969 Firmware 
Activation Notices: Not Supported 00:08:00.969 ANA Change Notices: Not Supported 00:08:00.969 PLE Aggregate Log Change Notices: Not Supported 00:08:00.969 LBA Status Info Alert Notices: Not Supported 00:08:00.969 EGE Aggregate Log Change Notices: Not Supported 00:08:00.969 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.969 Zone Descriptor Change Notices: Not Supported 00:08:00.969 Discovery Log Change Notices: Not Supported 00:08:00.969 Controller Attributes 00:08:00.969 128-bit Host Identifier: Not Supported 00:08:00.969 Non-Operational Permissive Mode: Not Supported 00:08:00.969 NVM Sets: Not Supported 00:08:00.969 Read Recovery Levels: Not Supported 00:08:00.969 Endurance Groups: Not Supported 00:08:00.969 Predictable Latency Mode: Not Supported 00:08:00.969 Traffic Based Keep ALive: Not Supported 00:08:00.969 Namespace Granularity: Not Supported 00:08:00.969 SQ Associations: Not Supported 00:08:00.969 UUID List: Not Supported 00:08:00.969 Multi-Domain Subsystem: Not Supported 00:08:00.969 Fixed Capacity Management: Not Supported 00:08:00.969 Variable Capacity Management: Not Supported 00:08:00.969 Delete Endurance Group: Not Supported 00:08:00.969 Delete NVM Set: Not Supported 00:08:00.969 Extended LBA Formats Supported: Supported 00:08:00.969 Flexible Data Placement Supported: Not Supported 00:08:00.969 00:08:00.969 Controller Memory Buffer Support 00:08:00.969 ================================ 00:08:00.969 Supported: No 00:08:00.969 00:08:00.969 Persistent Memory Region Support 00:08:00.969 ================================ 00:08:00.969 Supported: No 00:08:00.969 00:08:00.969 Admin Command Set Attributes 00:08:00.969 ============================ 00:08:00.969 Security Send/Receive: Not Supported 00:08:00.969 Format NVM: Supported 00:08:00.969 Firmware Activate/Download: Not Supported 00:08:00.969 Namespace Management: Supported 00:08:00.969 Device Self-Test: Not Supported 00:08:00.969 Directives: Supported 00:08:00.969 NVMe-MI: Not Supported 00:08:00.969 Virtualization Management: Not Supported 00:08:00.969 Doorbell Buffer Config: Supported 00:08:00.969 Get LBA Status Capability: Not Supported 00:08:00.969 Command & Feature Lockdown Capability: Not Supported 00:08:00.969 Abort Command Limit: 4 00:08:00.969 Async Event Request Limit: 4 00:08:00.969 Number of Firmware Slots: N/A 00:08:00.969 Firmware Slot 1 Read-Only: N/A 00:08:00.969 Firmware Activation Without Reset: N/A 00:08:00.969 Multiple Update Detection Support: N/A 00:08:00.969 Firmware Update Granularity: No Information Provided 00:08:00.969 Per-Namespace SMART Log: Yes 00:08:00.969 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.969 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:00.969 Command Effects Log Page: Supported 00:08:00.969 Get Log Page Extended Data: Supported 00:08:00.970 Telemetry Log Pages: Not Supported 00:08:00.970 Persistent Event Log Pages: Not Supported 00:08:00.970 Supported Log Pages Log Page: May Support 00:08:00.970 Commands Supported & Effects Log Page: Not Supported 00:08:00.970 Feature Identifiers & Effects Log Page:May Support 00:08:00.970 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.970 Data Area 4 for Telemetry Log: Not Supported 00:08:00.970 Error Log Page Entries Supported: 1 00:08:00.970 Keep Alive: Not Supported 00:08:00.970 00:08:00.970 NVM Command Set Attributes 00:08:00.970 ========================== 00:08:00.970 Submission Queue Entry Size 00:08:00.970 Max: 64 00:08:00.970 Min: 64 00:08:00.970 Completion Queue Entry Size 00:08:00.970 Max: 16 
00:08:00.970 Min: 16 00:08:00.970 Number of Namespaces: 256 00:08:00.970 Compare Command: Supported 00:08:00.970 Write Uncorrectable Command: Not Supported 00:08:00.970 Dataset Management Command: Supported 00:08:00.970 Write Zeroes Command: Supported 00:08:00.970 Set Features Save Field: Supported 00:08:00.970 Reservations: Not Supported 00:08:00.970 Timestamp: Supported 00:08:00.970 Copy: Supported 00:08:00.970 Volatile Write Cache: Present 00:08:00.970 Atomic Write Unit (Normal): 1 00:08:00.970 Atomic Write Unit (PFail): 1 00:08:00.970 Atomic Compare & Write Unit: 1 00:08:00.970 Fused Compare & Write: Not Supported 00:08:00.970 Scatter-Gather List 00:08:00.970 SGL Command Set: Supported 00:08:00.970 SGL Keyed: Not Supported 00:08:00.970 SGL Bit Bucket Descriptor: Not Supported 00:08:00.970 SGL Metadata Pointer: Not Supported 00:08:00.970 Oversized SGL: Not Supported 00:08:00.970 SGL Metadata Address: Not Supported 00:08:00.970 SGL Offset: Not Supported 00:08:00.970 Transport SGL Data Block: Not Supported 00:08:00.970 Replay Protected Memory Block: Not Supported 00:08:00.970 00:08:00.970 Firmware Slot Information 00:08:00.970 ========================= 00:08:00.970 Active slot: 1 00:08:00.970 Slot 1 Firmware Revision: 1.0 00:08:00.970 00:08:00.970 00:08:00.970 Commands Supported and Effects 00:08:00.970 ============================== 00:08:00.970 Admin Commands 00:08:00.970 -------------- 00:08:00.970 Delete I/O Submission Queue (00h): Supported 00:08:00.970 Create I/O Submission Queue (01h): Supported 00:08:00.970 Get Log Page (02h): Supported 00:08:00.970 Delete I/O Completion Queue (04h): Supported 00:08:00.970 Create I/O Completion Queue (05h): Supported 00:08:00.970 Identify (06h): Supported 00:08:00.970 Abort (08h): Supported 00:08:00.970 Set Features (09h): Supported 00:08:00.970 Get Features (0Ah): Supported 00:08:00.970 Asynchronous Event Request (0Ch): Supported 00:08:00.970 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.970 Directive Send (19h): Supported 00:08:00.970 Directive Receive (1Ah): Supported 00:08:00.970 Virtualization Management (1Ch): Supported 00:08:00.970 Doorbell Buffer Config (7Ch): Supported 00:08:00.970 Format NVM (80h): Supported LBA-Change 00:08:00.970 I/O Commands 00:08:00.970 ------------ 00:08:00.970 Flush (00h): Supported LBA-Change 00:08:00.970 Write (01h): Supported LBA-Change 00:08:00.970 Read (02h): Supported 00:08:00.970 Compare (05h): Supported 00:08:00.970 Write Zeroes (08h): Supported LBA-Change 00:08:00.970 Dataset Management (09h): Supported LBA-Change 00:08:00.970 Unknown (0Ch): Supported 00:08:00.970 Unknown (12h): Supported 00:08:00.970 Copy (19h): Supported LBA-Change 00:08:00.970 Unknown (1Dh): Supported LBA-Change 00:08:00.970 00:08:00.970 Error Log 00:08:00.970 ========= 00:08:00.970 00:08:00.970 Arbitration 00:08:00.970 =========== 00:08:00.970 Arbitration Burst: no limit 00:08:00.970 00:08:00.970 Power Management 00:08:00.970 ================ 00:08:00.970 Number of Power States: 1 00:08:00.970 Current Power State: Power State #0 00:08:00.970 Power State #0: 00:08:00.970 Max Power: 25.00 W 00:08:00.970 Non-Operational State: Operational 00:08:00.970 Entry Latency: 16 microseconds 00:08:00.970 Exit Latency: 4 microseconds 00:08:00.970 Relative Read Throughput: 0 00:08:00.970 Relative Read Latency: 0 00:08:00.970 Relative Write Throughput: 0 00:08:00.970 Relative Write Latency: 0 00:08:00.970 Idle Power: Not Reported 00:08:00.970 Active Power: Not Reported 00:08:00.970 Non-Operational Permissive Mode: Not Supported 
00:08:00.970 00:08:00.970 Health Information 00:08:00.970 ================== 00:08:00.970 Critical Warnings: 00:08:00.970 Available Spare Space: OK 00:08:00.970 Temperature: OK 00:08:00.970 Device Reliability: OK 00:08:00.970 Read Only: No 00:08:00.970 Volatile Memory Backup: OK 00:08:00.970 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.970 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.970 Available Spare: 0% 00:08:00.970 Available Spare Threshold: 0% 00:08:00.970 Life Percentage Used: 0% 00:08:00.970 Data Units Read: 2191 00:08:00.970 Data Units Written: 1979 00:08:00.970 Host Read Commands: 113407 00:08:00.970 Host Write Commands: 111676 00:08:00.970 Controller Busy Time: 0 minutes 00:08:00.970 Power Cycles: 0 00:08:00.970 Power On Hours: 0 hours 00:08:00.970 Unsafe Shutdowns: 0 00:08:00.970 Unrecoverable Media Errors: 0 00:08:00.970 Lifetime Error Log Entries: 0 00:08:00.970 Warning Temperature Time: 0 minutes 00:08:00.970 Critical Temperature Time: 0 minutes 00:08:00.970 00:08:00.970 Number of Queues 00:08:00.970 ================ 00:08:00.970 Number of I/O Submission Queues: 64 00:08:00.970 Number of I/O Completion Queues: 64 00:08:00.970 00:08:00.970 ZNS Specific Controller Data 00:08:00.970 ============================ 00:08:00.970 Zone Append Size Limit: 0 00:08:00.970 00:08:00.970 00:08:00.970 Active Namespaces 00:08:00.970 ================= 00:08:00.970 Namespace ID:1 00:08:00.970 Error Recovery Timeout: Unlimited 00:08:00.970 Command Set Identifier: NVM (00h) 00:08:00.970 Deallocate: Supported 00:08:00.970 Deallocated/Unwritten Error: Supported 00:08:00.970 Deallocated Read Value: All 0x00 00:08:00.970 Deallocate in Write Zeroes: Not Supported 00:08:00.970 Deallocated Guard Field: 0xFFFF 00:08:00.970 Flush: Supported 00:08:00.970 Reservation: Not Supported 00:08:00.970 Namespace Sharing Capabilities: Private 00:08:00.970 Size (in LBAs): 1048576 (4GiB) 00:08:00.970 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.970 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.970 Thin Provisioning: Not Supported 00:08:00.971 Per-NS Atomic Units: No 00:08:00.971 Maximum Single Source Range Length: 128 00:08:00.971 Maximum Copy Length: 128 00:08:00.971 Maximum Source Range Count: 128 00:08:00.971 NGUID/EUI64 Never Reused: No 00:08:00.971 Namespace Write Protected: No 00:08:00.971 Number of LBA Formats: 8 00:08:00.971 Current LBA Format: LBA Format #04 00:08:00.971 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.971 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.971 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.971 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.971 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.971 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.971 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.971 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.971 00:08:00.971 NVM Specific Namespace Data 00:08:00.971 =========================== 00:08:00.971 Logical Block Storage Tag Mask: 0 00:08:00.971 Protection Information Capabilities: 00:08:00.971 16b Guard Protection Information Storage Tag Support: No 00:08:00.971 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.971 Storage Tag Check Read Support: No 00:08:00.971 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Namespace ID:2 00:08:00.971 Error Recovery Timeout: Unlimited 00:08:00.971 Command Set Identifier: NVM (00h) 00:08:00.971 Deallocate: Supported 00:08:00.971 Deallocated/Unwritten Error: Supported 00:08:00.971 Deallocated Read Value: All 0x00 00:08:00.971 Deallocate in Write Zeroes: Not Supported 00:08:00.971 Deallocated Guard Field: 0xFFFF 00:08:00.971 Flush: Supported 00:08:00.971 Reservation: Not Supported 00:08:00.971 Namespace Sharing Capabilities: Private 00:08:00.971 Size (in LBAs): 1048576 (4GiB) 00:08:00.971 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.971 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.971 Thin Provisioning: Not Supported 00:08:00.971 Per-NS Atomic Units: No 00:08:00.971 Maximum Single Source Range Length: 128 00:08:00.971 Maximum Copy Length: 128 00:08:00.971 Maximum Source Range Count: 128 00:08:00.971 NGUID/EUI64 Never Reused: No 00:08:00.971 Namespace Write Protected: No 00:08:00.971 Number of LBA Formats: 8 00:08:00.971 Current LBA Format: LBA Format #04 00:08:00.971 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.971 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.971 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.971 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.971 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.971 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.971 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.971 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.971 00:08:00.971 NVM Specific Namespace Data 00:08:00.971 =========================== 00:08:00.971 Logical Block Storage Tag Mask: 0 00:08:00.971 Protection Information Capabilities: 00:08:00.971 16b Guard Protection Information Storage Tag Support: No 00:08:00.971 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.971 Storage Tag Check Read Support: No 00:08:00.971 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.971 Namespace ID:3 00:08:00.971 Error Recovery Timeout: Unlimited 00:08:00.971 Command Set Identifier: NVM (00h) 00:08:00.971 Deallocate: Supported 00:08:00.971 Deallocated/Unwritten Error: Supported 00:08:00.971 Deallocated Read 
Value: All 0x00 00:08:00.971 Deallocate in Write Zeroes: Not Supported 00:08:00.971 Deallocated Guard Field: 0xFFFF 00:08:00.971 Flush: Supported 00:08:00.971 Reservation: Not Supported 00:08:00.971 Namespace Sharing Capabilities: Private 00:08:00.971 Size (in LBAs): 1048576 (4GiB) 00:08:00.971 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.971 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.971 Thin Provisioning: Not Supported 00:08:00.971 Per-NS Atomic Units: No 00:08:00.971 Maximum Single Source Range Length: 128 00:08:00.971 Maximum Copy Length: 128 00:08:00.971 Maximum Source Range Count: 128 00:08:00.971 NGUID/EUI64 Never Reused: No 00:08:00.971 Namespace Write Protected: No 00:08:00.971 Number of LBA Formats: 8 00:08:00.971 Current LBA Format: LBA Format #04 00:08:00.971 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.971 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.971 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.971 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.971 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.971 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.971 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.971 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.971 00:08:00.971 NVM Specific Namespace Data 00:08:00.971 =========================== 00:08:00.971 Logical Block Storage Tag Mask: 0 00:08:00.971 Protection Information Capabilities: 00:08:00.971 16b Guard Protection Information Storage Tag Support: No 00:08:00.971 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.282 Storage Tag Check Read Support: No 00:08:01.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.282 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:01.282 16:34:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:01.282 ===================================================== 00:08:01.282 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.282 ===================================================== 00:08:01.282 Controller Capabilities/Features 00:08:01.282 ================================ 00:08:01.282 Vendor ID: 1b36 00:08:01.282 Subsystem Vendor ID: 1af4 00:08:01.282 Serial Number: 12343 00:08:01.282 Model Number: QEMU NVMe Ctrl 00:08:01.282 Firmware Version: 8.0.0 00:08:01.282 Recommended Arb Burst: 6 00:08:01.282 IEEE OUI Identifier: 00 54 52 00:08:01.282 Multi-path I/O 00:08:01.282 May have multiple subsystem ports: No 00:08:01.282 May have multiple controllers: Yes 00:08:01.282 Associated with SR-IOV VF: No 00:08:01.282 Max Data Transfer Size: 524288 00:08:01.282 Max Number of Namespaces: 
256 00:08:01.282 Max Number of I/O Queues: 64 00:08:01.282 NVMe Specification Version (VS): 1.4 00:08:01.282 NVMe Specification Version (Identify): 1.4 00:08:01.282 Maximum Queue Entries: 2048 00:08:01.282 Contiguous Queues Required: Yes 00:08:01.282 Arbitration Mechanisms Supported 00:08:01.282 Weighted Round Robin: Not Supported 00:08:01.282 Vendor Specific: Not Supported 00:08:01.282 Reset Timeout: 7500 ms 00:08:01.282 Doorbell Stride: 4 bytes 00:08:01.282 NVM Subsystem Reset: Not Supported 00:08:01.282 Command Sets Supported 00:08:01.282 NVM Command Set: Supported 00:08:01.282 Boot Partition: Not Supported 00:08:01.282 Memory Page Size Minimum: 4096 bytes 00:08:01.282 Memory Page Size Maximum: 65536 bytes 00:08:01.282 Persistent Memory Region: Not Supported 00:08:01.282 Optional Asynchronous Events Supported 00:08:01.282 Namespace Attribute Notices: Supported 00:08:01.282 Firmware Activation Notices: Not Supported 00:08:01.282 ANA Change Notices: Not Supported 00:08:01.282 PLE Aggregate Log Change Notices: Not Supported 00:08:01.282 LBA Status Info Alert Notices: Not Supported 00:08:01.282 EGE Aggregate Log Change Notices: Not Supported 00:08:01.282 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.282 Zone Descriptor Change Notices: Not Supported 00:08:01.282 Discovery Log Change Notices: Not Supported 00:08:01.282 Controller Attributes 00:08:01.282 128-bit Host Identifier: Not Supported 00:08:01.282 Non-Operational Permissive Mode: Not Supported 00:08:01.282 NVM Sets: Not Supported 00:08:01.282 Read Recovery Levels: Not Supported 00:08:01.282 Endurance Groups: Supported 00:08:01.282 Predictable Latency Mode: Not Supported 00:08:01.282 Traffic Based Keep ALive: Not Supported 00:08:01.282 Namespace Granularity: Not Supported 00:08:01.282 SQ Associations: Not Supported 00:08:01.282 UUID List: Not Supported 00:08:01.282 Multi-Domain Subsystem: Not Supported 00:08:01.282 Fixed Capacity Management: Not Supported 00:08:01.282 Variable Capacity Management: Not Supported 00:08:01.282 Delete Endurance Group: Not Supported 00:08:01.282 Delete NVM Set: Not Supported 00:08:01.282 Extended LBA Formats Supported: Supported 00:08:01.282 Flexible Data Placement Supported: Supported 00:08:01.282 00:08:01.282 Controller Memory Buffer Support 00:08:01.282 ================================ 00:08:01.282 Supported: No 00:08:01.282 00:08:01.282 Persistent Memory Region Support 00:08:01.282 ================================ 00:08:01.282 Supported: No 00:08:01.282 00:08:01.282 Admin Command Set Attributes 00:08:01.282 ============================ 00:08:01.282 Security Send/Receive: Not Supported 00:08:01.282 Format NVM: Supported 00:08:01.282 Firmware Activate/Download: Not Supported 00:08:01.282 Namespace Management: Supported 00:08:01.282 Device Self-Test: Not Supported 00:08:01.282 Directives: Supported 00:08:01.282 NVMe-MI: Not Supported 00:08:01.282 Virtualization Management: Not Supported 00:08:01.282 Doorbell Buffer Config: Supported 00:08:01.282 Get LBA Status Capability: Not Supported 00:08:01.282 Command & Feature Lockdown Capability: Not Supported 00:08:01.282 Abort Command Limit: 4 00:08:01.282 Async Event Request Limit: 4 00:08:01.282 Number of Firmware Slots: N/A 00:08:01.282 Firmware Slot 1 Read-Only: N/A 00:08:01.282 Firmware Activation Without Reset: N/A 00:08:01.282 Multiple Update Detection Support: N/A 00:08:01.282 Firmware Update Granularity: No Information Provided 00:08:01.282 Per-Namespace SMART Log: Yes 00:08:01.282 Asymmetric Namespace Access Log Page: Not Supported 
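The controller capability figures above are internally consistent: "Max Data Transfer Size: 524288" is the Identify MDTS field expressed as a power of two in units of the minimum memory page size (4096 << 7), "Maximum Queue Entries: 2048" is the zero-based CAP.MQES value 2047 plus one, and the 4-byte doorbell stride corresponds to CAP.DSTRD = 0. The plain-C sketch below (no SPDK headers needed) rechecks that arithmetic; the exponent values are inferred from the numbers printed in this log rather than read from a device, so treat them as illustrative assumptions.

/* cap_check.c - recompute the capability figures reported above from the
 * underlying NVMe CAP/Identify fields. Field values are assumptions taken
 * from this log, not queried from hardware.
 * Build: cc -o cap_check cap_check.c && ./cap_check
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* CAP.MPSMIN is an exponent: minimum page size = 2^(12 + MPSMIN) bytes. */
    uint32_t mpsmin   = 0;                     /* log reports a 4096-byte minimum */
    uint32_t min_page = 1u << (12 + mpsmin);

    /* Identify Controller MDTS is a power of two in units of min_page;
     * MDTS = 0 would mean "no limit". 4096 << 7 = 524288 matches the log. */
    uint8_t  mdts     = 7;                     /* assumed from the 524288 above */
    uint32_t max_xfer = (mdts == 0) ? 0 : (min_page << mdts);

    /* CAP.MQES is zero-based: 2047 -> 2048 queue entries as reported. */
    uint16_t mqes  = 2047;

    /* CAP.DSTRD: doorbell stride = 4 << DSTRD bytes; 0 -> 4 bytes. */
    uint8_t  dstrd = 0;

    printf("Minimum memory page size : %u bytes\n", min_page);
    printf("Max data transfer size   : %u bytes\n", max_xfer);
    printf("Maximum queue entries    : %u\n", (unsigned)(mqes + 1));
    printf("Doorbell stride          : %u bytes\n", 4u << dstrd);
    return 0;
}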
00:08:01.282 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:01.282 Command Effects Log Page: Supported 00:08:01.282 Get Log Page Extended Data: Supported 00:08:01.282 Telemetry Log Pages: Not Supported 00:08:01.282 Persistent Event Log Pages: Not Supported 00:08:01.282 Supported Log Pages Log Page: May Support 00:08:01.282 Commands Supported & Effects Log Page: Not Supported 00:08:01.282 Feature Identifiers & Effects Log Page:May Support 00:08:01.282 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.282 Data Area 4 for Telemetry Log: Not Supported 00:08:01.282 Error Log Page Entries Supported: 1 00:08:01.282 Keep Alive: Not Supported 00:08:01.282 00:08:01.282 NVM Command Set Attributes 00:08:01.282 ========================== 00:08:01.282 Submission Queue Entry Size 00:08:01.282 Max: 64 00:08:01.282 Min: 64 00:08:01.282 Completion Queue Entry Size 00:08:01.282 Max: 16 00:08:01.282 Min: 16 00:08:01.282 Number of Namespaces: 256 00:08:01.282 Compare Command: Supported 00:08:01.283 Write Uncorrectable Command: Not Supported 00:08:01.283 Dataset Management Command: Supported 00:08:01.283 Write Zeroes Command: Supported 00:08:01.283 Set Features Save Field: Supported 00:08:01.283 Reservations: Not Supported 00:08:01.283 Timestamp: Supported 00:08:01.283 Copy: Supported 00:08:01.283 Volatile Write Cache: Present 00:08:01.283 Atomic Write Unit (Normal): 1 00:08:01.283 Atomic Write Unit (PFail): 1 00:08:01.283 Atomic Compare & Write Unit: 1 00:08:01.283 Fused Compare & Write: Not Supported 00:08:01.283 Scatter-Gather List 00:08:01.283 SGL Command Set: Supported 00:08:01.283 SGL Keyed: Not Supported 00:08:01.283 SGL Bit Bucket Descriptor: Not Supported 00:08:01.283 SGL Metadata Pointer: Not Supported 00:08:01.283 Oversized SGL: Not Supported 00:08:01.283 SGL Metadata Address: Not Supported 00:08:01.283 SGL Offset: Not Supported 00:08:01.283 Transport SGL Data Block: Not Supported 00:08:01.283 Replay Protected Memory Block: Not Supported 00:08:01.283 00:08:01.283 Firmware Slot Information 00:08:01.283 ========================= 00:08:01.283 Active slot: 1 00:08:01.283 Slot 1 Firmware Revision: 1.0 00:08:01.283 00:08:01.283 00:08:01.283 Commands Supported and Effects 00:08:01.283 ============================== 00:08:01.283 Admin Commands 00:08:01.283 -------------- 00:08:01.283 Delete I/O Submission Queue (00h): Supported 00:08:01.283 Create I/O Submission Queue (01h): Supported 00:08:01.283 Get Log Page (02h): Supported 00:08:01.283 Delete I/O Completion Queue (04h): Supported 00:08:01.283 Create I/O Completion Queue (05h): Supported 00:08:01.283 Identify (06h): Supported 00:08:01.283 Abort (08h): Supported 00:08:01.283 Set Features (09h): Supported 00:08:01.283 Get Features (0Ah): Supported 00:08:01.283 Asynchronous Event Request (0Ch): Supported 00:08:01.283 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.283 Directive Send (19h): Supported 00:08:01.283 Directive Receive (1Ah): Supported 00:08:01.283 Virtualization Management (1Ch): Supported 00:08:01.283 Doorbell Buffer Config (7Ch): Supported 00:08:01.283 Format NVM (80h): Supported LBA-Change 00:08:01.283 I/O Commands 00:08:01.283 ------------ 00:08:01.283 Flush (00h): Supported LBA-Change 00:08:01.283 Write (01h): Supported LBA-Change 00:08:01.283 Read (02h): Supported 00:08:01.283 Compare (05h): Supported 00:08:01.283 Write Zeroes (08h): Supported LBA-Change 00:08:01.283 Dataset Management (09h): Supported LBA-Change 00:08:01.283 Unknown (0Ch): Supported 00:08:01.283 Unknown (12h): Supported 00:08:01.283 Copy 
(19h): Supported LBA-Change 00:08:01.283 Unknown (1Dh): Supported LBA-Change 00:08:01.283 00:08:01.283 Error Log 00:08:01.283 ========= 00:08:01.283 00:08:01.283 Arbitration 00:08:01.283 =========== 00:08:01.283 Arbitration Burst: no limit 00:08:01.283 00:08:01.283 Power Management 00:08:01.283 ================ 00:08:01.283 Number of Power States: 1 00:08:01.283 Current Power State: Power State #0 00:08:01.283 Power State #0: 00:08:01.283 Max Power: 25.00 W 00:08:01.283 Non-Operational State: Operational 00:08:01.283 Entry Latency: 16 microseconds 00:08:01.283 Exit Latency: 4 microseconds 00:08:01.283 Relative Read Throughput: 0 00:08:01.283 Relative Read Latency: 0 00:08:01.283 Relative Write Throughput: 0 00:08:01.283 Relative Write Latency: 0 00:08:01.283 Idle Power: Not Reported 00:08:01.283 Active Power: Not Reported 00:08:01.283 Non-Operational Permissive Mode: Not Supported 00:08:01.283 00:08:01.283 Health Information 00:08:01.283 ================== 00:08:01.283 Critical Warnings: 00:08:01.283 Available Spare Space: OK 00:08:01.283 Temperature: OK 00:08:01.283 Device Reliability: OK 00:08:01.283 Read Only: No 00:08:01.283 Volatile Memory Backup: OK 00:08:01.283 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.283 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.283 Available Spare: 0% 00:08:01.283 Available Spare Threshold: 0% 00:08:01.283 Life Percentage Used: 0% 00:08:01.283 Data Units Read: 868 00:08:01.283 Data Units Written: 797 00:08:01.283 Host Read Commands: 39017 00:08:01.283 Host Write Commands: 38440 00:08:01.283 Controller Busy Time: 0 minutes 00:08:01.283 Power Cycles: 0 00:08:01.283 Power On Hours: 0 hours 00:08:01.283 Unsafe Shutdowns: 0 00:08:01.283 Unrecoverable Media Errors: 0 00:08:01.283 Lifetime Error Log Entries: 0 00:08:01.283 Warning Temperature Time: 0 minutes 00:08:01.283 Critical Temperature Time: 0 minutes 00:08:01.283 00:08:01.283 Number of Queues 00:08:01.283 ================ 00:08:01.283 Number of I/O Submission Queues: 64 00:08:01.283 Number of I/O Completion Queues: 64 00:08:01.283 00:08:01.283 ZNS Specific Controller Data 00:08:01.283 ============================ 00:08:01.283 Zone Append Size Limit: 0 00:08:01.283 00:08:01.283 00:08:01.283 Active Namespaces 00:08:01.283 ================= 00:08:01.283 Namespace ID:1 00:08:01.283 Error Recovery Timeout: Unlimited 00:08:01.283 Command Set Identifier: NVM (00h) 00:08:01.283 Deallocate: Supported 00:08:01.283 Deallocated/Unwritten Error: Supported 00:08:01.283 Deallocated Read Value: All 0x00 00:08:01.283 Deallocate in Write Zeroes: Not Supported 00:08:01.283 Deallocated Guard Field: 0xFFFF 00:08:01.283 Flush: Supported 00:08:01.283 Reservation: Not Supported 00:08:01.283 Namespace Sharing Capabilities: Multiple Controllers 00:08:01.283 Size (in LBAs): 262144 (1GiB) 00:08:01.283 Capacity (in LBAs): 262144 (1GiB) 00:08:01.283 Utilization (in LBAs): 262144 (1GiB) 00:08:01.283 Thin Provisioning: Not Supported 00:08:01.283 Per-NS Atomic Units: No 00:08:01.283 Maximum Single Source Range Length: 128 00:08:01.283 Maximum Copy Length: 128 00:08:01.283 Maximum Source Range Count: 128 00:08:01.283 NGUID/EUI64 Never Reused: No 00:08:01.283 Namespace Write Protected: No 00:08:01.283 Endurance group ID: 1 00:08:01.283 Number of LBA Formats: 8 00:08:01.283 Current LBA Format: LBA Format #04 00:08:01.283 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.283 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.283 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.283 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:01.283 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.283 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.283 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.283 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.283 00:08:01.283 Get Feature FDP: 00:08:01.283 ================ 00:08:01.283 Enabled: Yes 00:08:01.283 FDP configuration index: 0 00:08:01.283 00:08:01.283 FDP configurations log page 00:08:01.283 =========================== 00:08:01.283 Number of FDP configurations: 1 00:08:01.283 Version: 0 00:08:01.283 Size: 112 00:08:01.283 FDP Configuration Descriptor: 0 00:08:01.283 Descriptor Size: 96 00:08:01.283 Reclaim Group Identifier format: 2 00:08:01.283 FDP Volatile Write Cache: Not Present 00:08:01.283 FDP Configuration: Valid 00:08:01.283 Vendor Specific Size: 0 00:08:01.283 Number of Reclaim Groups: 2 00:08:01.283 Number of Reclaim Unit Handles: 8 00:08:01.283 Max Placement Identifiers: 128 00:08:01.283 Number of Namespaces Supported: 256 00:08:01.283 Reclaim Unit Nominal Size: 6000000 bytes 00:08:01.283 Estimated Reclaim Unit Time Limit: Not Reported 00:08:01.283 RUH Desc #000: RUH Type: Initially Isolated 00:08:01.283 RUH Desc #001: RUH Type: Initially Isolated 00:08:01.283 RUH Desc #002: RUH Type: Initially Isolated 00:08:01.283 RUH Desc #003: RUH Type: Initially Isolated 00:08:01.283 RUH Desc #004: RUH Type: Initially Isolated 00:08:01.283 RUH Desc #005: RUH Type: Initially Isolated 00:08:01.283 RUH Desc #006: RUH Type: Initially Isolated 00:08:01.283 RUH Desc #007: RUH Type: Initially Isolated 00:08:01.283 00:08:01.283 FDP reclaim unit handle usage log page 00:08:01.283 ====================================== 00:08:01.283 Number of Reclaim Unit Handles: 8 00:08:01.283 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:01.283 RUH Usage Desc #001: RUH Attributes: Unused 00:08:01.283 RUH Usage Desc #002: RUH Attributes: Unused 00:08:01.283 RUH Usage Desc #003: RUH Attributes: Unused 00:08:01.283 RUH Usage Desc #004: RUH Attributes: Unused 00:08:01.283 RUH Usage Desc #005: RUH Attributes: Unused 00:08:01.283 RUH Usage Desc #006: RUH Attributes: Unused 00:08:01.283 RUH Usage Desc #007: RUH Attributes: Unused 00:08:01.283 00:08:01.283 FDP statistics log page 00:08:01.283 ======================= 00:08:01.283 Host bytes with metadata written: 493985792 00:08:01.283 Media bytes with metadata written: 494039040 00:08:01.283 Media bytes erased: 0 00:08:01.283 00:08:01.283 FDP events log page 00:08:01.283 =================== 00:08:01.284 Number of FDP events: 0 00:08:01.284 00:08:01.284 NVM Specific Namespace Data 00:08:01.284 =========================== 00:08:01.284 Logical Block Storage Tag Mask: 0 00:08:01.284 Protection Information Capabilities: 00:08:01.284 16b Guard Protection Information Storage Tag Support: No 00:08:01.284 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.284 Storage Tag Check Read Support: No 00:08:01.284 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.284 00:08:01.284 real 0m1.351s 00:08:01.284 user 0m0.478s 00:08:01.284 sys 0m0.647s 00:08:01.284 16:34:46 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.284 16:34:46 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:01.284 ************************************ 00:08:01.284 END TEST nvme_identify 00:08:01.284 ************************************ 00:08:01.544 16:34:46 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:01.544 16:34:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.544 16:34:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.544 16:34:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.544 ************************************ 00:08:01.544 START TEST nvme_perf 00:08:01.544 ************************************ 00:08:01.544 16:34:46 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:01.544 16:34:46 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:02.932 Initializing NVMe Controllers 00:08:02.932 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.932 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.932 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:02.932 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.932 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:02.932 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:02.932 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:02.932 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:02.932 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:02.932 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:02.932 Initialization complete. Launching workers. 
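Two arithmetic cross-checks apply to the spdk_nvme_perf results that follow: throughput in MiB/s should equal IOPS multiplied by the 12288-byte I/O size, and, by Little's law, IOPS multiplied by the average latency should land close to the requested queue depth of 128. The plain-C sketch below reruns that arithmetic for the first device row; the figures are copied from this log, and the file name is only illustrative.

/* perf_check.c - sanity-check the spdk_nvme_perf summary below using only
 * numbers printed in this log (-o 12288, -q 128, first device row).
 * Build: cc -o perf_check perf_check.c && ./perf_check
 */
#include <stdio.h>

int main(void)
{
    double iops       = 6999.28;   /* IOPS column, PCIE 0000:00:10.0 NSID 1 */
    double io_bytes   = 12288.0;   /* -o 12288 on the command line */
    double avg_lat_us = 18326.03;  /* Average (us) column, same row */
    int    qdepth     = 128;       /* -q 128 on the command line */

    /* Throughput: I/Os per second times bytes per I/O, reported in MiB/s. */
    double mibps = iops * io_bytes / (1024.0 * 1024.0);

    /* Little's law: mean outstanding I/Os = arrival rate * mean latency.
     * With a saturating fixed queue depth this should be close to -q. */
    double inflight = iops * (avg_lat_us / 1e6);

    printf("Computed throughput : %.2f MiB/s (table reports 82.02)\n", mibps);
    printf("Computed in-flight  : %.1f I/Os (queue depth requested: %d)\n",
           inflight, qdepth);
    return 0;
}

For the 0000:00:10.0 row this evaluates to about 82.0 MiB/s and roughly 128 outstanding I/Os, consistent with the table and the -q setting.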
00:08:02.932 ======================================================== 00:08:02.932 Latency(us) 00:08:02.932 Device Information : IOPS MiB/s Average min max 00:08:02.932 PCIE (0000:00:10.0) NSID 1 from core 0: 6999.28 82.02 18326.03 13117.71 45406.18 00:08:02.932 PCIE (0000:00:11.0) NSID 1 from core 0: 6999.28 82.02 18296.06 12857.42 44254.10 00:08:02.932 PCIE (0000:00:13.0) NSID 1 from core 0: 6999.28 82.02 18262.81 11676.20 44465.13 00:08:02.932 PCIE (0000:00:12.0) NSID 1 from core 0: 6999.28 82.02 18229.09 11282.06 43663.30 00:08:02.932 PCIE (0000:00:12.0) NSID 2 from core 0: 6999.28 82.02 18196.44 10592.51 42428.63 00:08:02.932 PCIE (0000:00:12.0) NSID 3 from core 0: 7062.91 82.77 17998.45 10255.90 29877.02 00:08:02.932 ======================================================== 00:08:02.932 Total : 42059.30 492.88 18217.82 10255.90 45406.18 00:08:02.932 00:08:02.932 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:02.932 ================================================================================= 00:08:02.932 1.00000% : 15022.868us 00:08:02.932 10.00000% : 16031.114us 00:08:02.932 25.00000% : 16736.886us 00:08:02.932 50.00000% : 17745.132us 00:08:02.932 75.00000% : 19156.677us 00:08:02.932 90.00000% : 20669.046us 00:08:02.932 95.00000% : 21576.468us 00:08:02.932 98.00000% : 23391.311us 00:08:02.932 99.00000% : 35086.966us 00:08:02.932 99.50000% : 44362.831us 00:08:02.932 99.90000% : 45371.077us 00:08:02.932 99.99000% : 45572.726us 00:08:02.932 99.99900% : 45572.726us 00:08:02.932 99.99990% : 45572.726us 00:08:02.932 99.99999% : 45572.726us 00:08:02.932 00:08:02.932 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:02.932 ================================================================================= 00:08:02.932 1.00000% : 15123.692us 00:08:02.932 10.00000% : 16031.114us 00:08:02.932 25.00000% : 16736.886us 00:08:02.932 50.00000% : 17845.957us 00:08:02.932 75.00000% : 19257.502us 00:08:02.932 90.00000% : 20366.572us 00:08:02.932 95.00000% : 21374.818us 00:08:02.932 98.00000% : 23996.258us 00:08:02.932 99.00000% : 33473.772us 00:08:02.932 99.50000% : 43354.585us 00:08:02.932 99.90000% : 44161.182us 00:08:02.932 99.99000% : 44362.831us 00:08:02.932 99.99900% : 44362.831us 00:08:02.932 99.99990% : 44362.831us 00:08:02.933 99.99999% : 44362.831us 00:08:02.933 00:08:02.933 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:02.933 ================================================================================= 00:08:02.933 1.00000% : 14821.218us 00:08:02.933 10.00000% : 16031.114us 00:08:02.933 25.00000% : 16736.886us 00:08:02.933 50.00000% : 17745.132us 00:08:02.933 75.00000% : 19156.677us 00:08:02.933 90.00000% : 20769.871us 00:08:02.933 95.00000% : 21374.818us 00:08:02.933 98.00000% : 22786.363us 00:08:02.933 99.00000% : 33070.474us 00:08:02.933 99.50000% : 43556.234us 00:08:02.933 99.90000% : 44362.831us 00:08:02.933 99.99000% : 44564.480us 00:08:02.933 99.99900% : 44564.480us 00:08:02.933 99.99990% : 44564.480us 00:08:02.933 99.99999% : 44564.480us 00:08:02.933 00:08:02.933 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:02.933 ================================================================================= 00:08:02.933 1.00000% : 14821.218us 00:08:02.933 10.00000% : 16031.114us 00:08:02.933 25.00000% : 16636.062us 00:08:02.933 50.00000% : 17745.132us 00:08:02.933 75.00000% : 19055.852us 00:08:02.933 90.00000% : 20870.695us 00:08:02.933 95.00000% : 21576.468us 00:08:02.933 98.00000% : 22685.538us 
00:08:02.933 99.00000% : 31255.631us 00:08:02.933 99.50000% : 42749.637us 00:08:02.933 99.90000% : 43556.234us 00:08:02.933 99.99000% : 43757.883us 00:08:02.933 99.99900% : 43757.883us 00:08:02.933 99.99990% : 43757.883us 00:08:02.933 99.99999% : 43757.883us 00:08:02.933 00:08:02.933 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:02.933 ================================================================================= 00:08:02.933 1.00000% : 14216.271us 00:08:02.933 10.00000% : 16031.114us 00:08:02.933 25.00000% : 16736.886us 00:08:02.933 50.00000% : 17745.132us 00:08:02.933 75.00000% : 18955.028us 00:08:02.933 90.00000% : 20870.695us 00:08:02.933 95.00000% : 21677.292us 00:08:02.933 98.00000% : 22584.714us 00:08:02.933 99.00000% : 29844.086us 00:08:02.933 99.50000% : 41539.742us 00:08:02.933 99.90000% : 42346.338us 00:08:02.933 99.99000% : 42547.988us 00:08:02.933 99.99900% : 42547.988us 00:08:02.933 99.99990% : 42547.988us 00:08:02.933 99.99999% : 42547.988us 00:08:02.933 00:08:02.933 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:02.933 ================================================================================= 00:08:02.933 1.00000% : 14115.446us 00:08:02.933 10.00000% : 16031.114us 00:08:02.933 25.00000% : 16736.886us 00:08:02.933 50.00000% : 17644.308us 00:08:02.933 75.00000% : 19055.852us 00:08:02.933 90.00000% : 20769.871us 00:08:02.933 95.00000% : 21475.643us 00:08:02.933 98.00000% : 22181.415us 00:08:02.933 99.00000% : 23189.662us 00:08:02.933 99.50000% : 29037.489us 00:08:02.933 99.90000% : 29844.086us 00:08:02.933 99.99000% : 30045.735us 00:08:02.933 99.99900% : 30045.735us 00:08:02.933 99.99990% : 30045.735us 00:08:02.933 99.99999% : 30045.735us 00:08:02.933 00:08:02.933 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:02.933 ============================================================================== 00:08:02.933 Range in us Cumulative IO count 00:08:02.933 13107.200 - 13208.025: 0.0710% ( 5) 00:08:02.933 13208.025 - 13308.849: 0.1278% ( 4) 00:08:02.933 13308.849 - 13409.674: 0.1705% ( 3) 00:08:02.933 13409.674 - 13510.498: 0.2415% ( 5) 00:08:02.933 13510.498 - 13611.323: 0.2841% ( 3) 00:08:02.933 13611.323 - 13712.148: 0.3835% ( 7) 00:08:02.933 13712.148 - 13812.972: 0.4403% ( 4) 00:08:02.933 13812.972 - 13913.797: 0.4830% ( 3) 00:08:02.933 13913.797 - 14014.622: 0.5398% ( 4) 00:08:02.933 14014.622 - 14115.446: 0.5824% ( 3) 00:08:02.933 14115.446 - 14216.271: 0.6534% ( 5) 00:08:02.933 14216.271 - 14317.095: 0.7244% ( 5) 00:08:02.933 14317.095 - 14417.920: 0.7670% ( 3) 00:08:02.933 14417.920 - 14518.745: 0.8381% ( 5) 00:08:02.933 14518.745 - 14619.569: 0.8949% ( 4) 00:08:02.933 14619.569 - 14720.394: 0.9091% ( 1) 00:08:02.933 14821.218 - 14922.043: 0.9801% ( 5) 00:08:02.933 14922.043 - 15022.868: 1.0227% ( 3) 00:08:02.933 15022.868 - 15123.692: 1.0938% ( 5) 00:08:02.933 15123.692 - 15224.517: 1.3352% ( 17) 00:08:02.933 15224.517 - 15325.342: 1.7472% ( 29) 00:08:02.933 15325.342 - 15426.166: 2.2585% ( 36) 00:08:02.933 15426.166 - 15526.991: 3.0256% ( 54) 00:08:02.933 15526.991 - 15627.815: 4.0909% ( 75) 00:08:02.933 15627.815 - 15728.640: 5.6108% ( 107) 00:08:02.933 15728.640 - 15829.465: 7.0028% ( 98) 00:08:02.933 15829.465 - 15930.289: 8.6506% ( 116) 00:08:02.933 15930.289 - 16031.114: 10.5824% ( 136) 00:08:02.933 16031.114 - 16131.938: 12.3580% ( 125) 00:08:02.933 16131.938 - 16232.763: 14.6023% ( 158) 00:08:02.933 16232.763 - 16333.588: 16.9744% ( 167) 00:08:02.933 16333.588 - 16434.412: 19.6591% ( 
189) 00:08:02.933 16434.412 - 16535.237: 22.1023% ( 172) 00:08:02.933 16535.237 - 16636.062: 24.3324% ( 157) 00:08:02.933 16636.062 - 16736.886: 27.0170% ( 189) 00:08:02.933 16736.886 - 16837.711: 29.6307% ( 184) 00:08:02.933 16837.711 - 16938.535: 32.0170% ( 168) 00:08:02.933 16938.535 - 17039.360: 34.2045% ( 154) 00:08:02.933 17039.360 - 17140.185: 36.4347% ( 157) 00:08:02.933 17140.185 - 17241.009: 39.1051% ( 188) 00:08:02.933 17241.009 - 17341.834: 41.1506% ( 144) 00:08:02.933 17341.834 - 17442.658: 43.5369% ( 168) 00:08:02.933 17442.658 - 17543.483: 45.8381% ( 162) 00:08:02.933 17543.483 - 17644.308: 48.1108% ( 160) 00:08:02.933 17644.308 - 17745.132: 50.4688% ( 166) 00:08:02.933 17745.132 - 17845.957: 52.9545% ( 175) 00:08:02.933 17845.957 - 17946.782: 55.3267% ( 167) 00:08:02.933 17946.782 - 18047.606: 57.2159% ( 133) 00:08:02.933 18047.606 - 18148.431: 59.5597% ( 165) 00:08:02.933 18148.431 - 18249.255: 61.2926% ( 122) 00:08:02.933 18249.255 - 18350.080: 63.4659% ( 153) 00:08:02.933 18350.080 - 18450.905: 65.0284% ( 110) 00:08:02.933 18450.905 - 18551.729: 66.3778% ( 95) 00:08:02.933 18551.729 - 18652.554: 68.0540% ( 118) 00:08:02.933 18652.554 - 18753.378: 69.5455% ( 105) 00:08:02.933 18753.378 - 18854.203: 71.0227% ( 104) 00:08:02.933 18854.203 - 18955.028: 72.5852% ( 110) 00:08:02.933 18955.028 - 19055.852: 74.0199% ( 101) 00:08:02.933 19055.852 - 19156.677: 75.4261% ( 99) 00:08:02.933 19156.677 - 19257.502: 76.8040% ( 97) 00:08:02.933 19257.502 - 19358.326: 78.1818% ( 97) 00:08:02.933 19358.326 - 19459.151: 79.2898% ( 78) 00:08:02.933 19459.151 - 19559.975: 80.5824% ( 91) 00:08:02.933 19559.975 - 19660.800: 81.6903% ( 78) 00:08:02.933 19660.800 - 19761.625: 82.7273% ( 73) 00:08:02.933 19761.625 - 19862.449: 83.7642% ( 73) 00:08:02.933 19862.449 - 19963.274: 84.9716% ( 85) 00:08:02.933 19963.274 - 20064.098: 86.0511% ( 76) 00:08:02.933 20064.098 - 20164.923: 87.0597% ( 71) 00:08:02.933 20164.923 - 20265.748: 87.7983% ( 52) 00:08:02.933 20265.748 - 20366.572: 88.5938% ( 56) 00:08:02.933 20366.572 - 20467.397: 89.1051% ( 36) 00:08:02.933 20467.397 - 20568.222: 89.7443% ( 45) 00:08:02.933 20568.222 - 20669.046: 90.4545% ( 50) 00:08:02.933 20669.046 - 20769.871: 91.1222% ( 47) 00:08:02.933 20769.871 - 20870.695: 91.5909% ( 33) 00:08:02.933 20870.695 - 20971.520: 92.1449% ( 39) 00:08:02.933 20971.520 - 21072.345: 92.6420% ( 35) 00:08:02.933 21072.345 - 21173.169: 93.1392% ( 35) 00:08:02.933 21173.169 - 21273.994: 93.6080% ( 33) 00:08:02.933 21273.994 - 21374.818: 94.0341% ( 30) 00:08:02.933 21374.818 - 21475.643: 94.4460% ( 29) 00:08:02.933 21475.643 - 21576.468: 95.0142% ( 40) 00:08:02.933 21576.468 - 21677.292: 95.4972% ( 34) 00:08:02.933 21677.292 - 21778.117: 95.8523% ( 25) 00:08:02.933 21778.117 - 21878.942: 96.2358% ( 27) 00:08:02.933 21878.942 - 21979.766: 96.5199% ( 20) 00:08:02.933 21979.766 - 22080.591: 96.7614% ( 17) 00:08:02.933 22080.591 - 22181.415: 96.9176% ( 11) 00:08:02.933 22181.415 - 22282.240: 97.0739% ( 11) 00:08:02.933 22282.240 - 22383.065: 97.2443% ( 12) 00:08:02.933 22383.065 - 22483.889: 97.3864% ( 10) 00:08:02.933 22483.889 - 22584.714: 97.5710% ( 13) 00:08:02.933 22584.714 - 22685.538: 97.6278% ( 4) 00:08:02.933 22685.538 - 22786.363: 97.7557% ( 9) 00:08:02.933 22786.363 - 22887.188: 97.7983% ( 3) 00:08:02.933 22887.188 - 22988.012: 97.8693% ( 5) 00:08:02.933 22988.012 - 23088.837: 97.8977% ( 2) 00:08:02.933 23088.837 - 23189.662: 97.9403% ( 3) 00:08:02.933 23189.662 - 23290.486: 97.9688% ( 2) 00:08:02.933 23290.486 - 23391.311: 98.0114% ( 3) 00:08:02.933 
23391.311 - 23492.135: 98.0398% ( 2) 00:08:02.933 23492.135 - 23592.960: 98.0824% ( 3) 00:08:02.933 23592.960 - 23693.785: 98.1392% ( 4) 00:08:02.933 23693.785 - 23794.609: 98.1676% ( 2) 00:08:02.933 23794.609 - 23895.434: 98.1818% ( 1) 00:08:02.933 33070.474 - 33272.123: 98.2386% ( 4) 00:08:02.933 33272.123 - 33473.772: 98.3097% ( 5) 00:08:02.933 33473.772 - 33675.422: 98.4233% ( 8) 00:08:02.933 33675.422 - 33877.071: 98.4943% ( 5) 00:08:02.933 33877.071 - 34078.720: 98.5938% ( 7) 00:08:02.933 34078.720 - 34280.369: 98.6790% ( 6) 00:08:02.933 34280.369 - 34482.018: 98.7784% ( 7) 00:08:02.933 34482.018 - 34683.668: 98.8636% ( 6) 00:08:02.933 34683.668 - 34885.317: 98.9631% ( 7) 00:08:02.933 34885.317 - 35086.966: 99.0767% ( 8) 00:08:02.933 35086.966 - 35288.615: 99.0909% ( 1) 00:08:02.933 43354.585 - 43556.234: 99.1193% ( 2) 00:08:02.934 43556.234 - 43757.883: 99.2188% ( 7) 00:08:02.934 43757.883 - 43959.532: 99.3040% ( 6) 00:08:02.934 43959.532 - 44161.182: 99.4034% ( 7) 00:08:02.934 44161.182 - 44362.831: 99.5028% ( 7) 00:08:02.934 44362.831 - 44564.480: 99.6023% ( 7) 00:08:02.934 44564.480 - 44766.129: 99.6875% ( 6) 00:08:02.934 44766.129 - 44967.778: 99.7869% ( 7) 00:08:02.934 44967.778 - 45169.428: 99.8864% ( 7) 00:08:02.934 45169.428 - 45371.077: 99.9858% ( 7) 00:08:02.934 45371.077 - 45572.726: 100.0000% ( 1) 00:08:02.934 00:08:02.934 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:02.934 ============================================================================== 00:08:02.934 Range in us Cumulative IO count 00:08:02.934 12855.138 - 12905.551: 0.0426% ( 3) 00:08:02.934 12905.551 - 13006.375: 0.0994% ( 4) 00:08:02.934 13006.375 - 13107.200: 0.1705% ( 5) 00:08:02.934 13107.200 - 13208.025: 0.2415% ( 5) 00:08:02.934 13208.025 - 13308.849: 0.3125% ( 5) 00:08:02.934 13308.849 - 13409.674: 0.3693% ( 4) 00:08:02.934 13409.674 - 13510.498: 0.4403% ( 5) 00:08:02.934 13510.498 - 13611.323: 0.4972% ( 4) 00:08:02.934 13611.323 - 13712.148: 0.5682% ( 5) 00:08:02.934 13712.148 - 13812.972: 0.6250% ( 4) 00:08:02.934 13812.972 - 13913.797: 0.6960% ( 5) 00:08:02.934 13913.797 - 14014.622: 0.7670% ( 5) 00:08:02.934 14014.622 - 14115.446: 0.8381% ( 5) 00:08:02.934 14115.446 - 14216.271: 0.8949% ( 4) 00:08:02.934 14216.271 - 14317.095: 0.9091% ( 1) 00:08:02.934 14922.043 - 15022.868: 0.9375% ( 2) 00:08:02.934 15022.868 - 15123.692: 1.0227% ( 6) 00:08:02.934 15123.692 - 15224.517: 1.2216% ( 14) 00:08:02.934 15224.517 - 15325.342: 1.4347% ( 15) 00:08:02.934 15325.342 - 15426.166: 1.8324% ( 28) 00:08:02.934 15426.166 - 15526.991: 2.5994% ( 54) 00:08:02.934 15526.991 - 15627.815: 3.7642% ( 82) 00:08:02.934 15627.815 - 15728.640: 5.2699% ( 106) 00:08:02.934 15728.640 - 15829.465: 7.0312% ( 124) 00:08:02.934 15829.465 - 15930.289: 8.7784% ( 123) 00:08:02.934 15930.289 - 16031.114: 10.6676% ( 133) 00:08:02.934 16031.114 - 16131.938: 12.8551% ( 154) 00:08:02.934 16131.938 - 16232.763: 15.1847% ( 164) 00:08:02.934 16232.763 - 16333.588: 17.6989% ( 177) 00:08:02.934 16333.588 - 16434.412: 20.0852% ( 168) 00:08:02.934 16434.412 - 16535.237: 22.3438% ( 159) 00:08:02.934 16535.237 - 16636.062: 24.8153% ( 174) 00:08:02.934 16636.062 - 16736.886: 27.1733% ( 166) 00:08:02.934 16736.886 - 16837.711: 29.4034% ( 157) 00:08:02.934 16837.711 - 16938.535: 31.7614% ( 166) 00:08:02.934 16938.535 - 17039.360: 34.2756% ( 177) 00:08:02.934 17039.360 - 17140.185: 36.4915% ( 156) 00:08:02.934 17140.185 - 17241.009: 38.9631% ( 174) 00:08:02.934 17241.009 - 17341.834: 41.1790% ( 156) 00:08:02.934 17341.834 - 
17442.658: 43.2102% ( 143) 00:08:02.934 17442.658 - 17543.483: 45.0426% ( 129) 00:08:02.934 17543.483 - 17644.308: 47.1733% ( 150) 00:08:02.934 17644.308 - 17745.132: 49.3608% ( 154) 00:08:02.934 17745.132 - 17845.957: 51.6051% ( 158) 00:08:02.934 17845.957 - 17946.782: 54.0057% ( 169) 00:08:02.934 17946.782 - 18047.606: 56.2500% ( 158) 00:08:02.934 18047.606 - 18148.431: 58.3523% ( 148) 00:08:02.934 18148.431 - 18249.255: 60.4403% ( 147) 00:08:02.934 18249.255 - 18350.080: 62.4148% ( 139) 00:08:02.934 18350.080 - 18450.905: 64.3466% ( 136) 00:08:02.934 18450.905 - 18551.729: 66.0653% ( 121) 00:08:02.934 18551.729 - 18652.554: 67.7841% ( 121) 00:08:02.934 18652.554 - 18753.378: 69.3466% ( 110) 00:08:02.934 18753.378 - 18854.203: 70.6676% ( 93) 00:08:02.934 18854.203 - 18955.028: 71.9602% ( 91) 00:08:02.934 18955.028 - 19055.852: 73.2386% ( 90) 00:08:02.934 19055.852 - 19156.677: 74.4886% ( 88) 00:08:02.934 19156.677 - 19257.502: 75.8381% ( 95) 00:08:02.934 19257.502 - 19358.326: 77.3295% ( 105) 00:08:02.934 19358.326 - 19459.151: 78.8068% ( 104) 00:08:02.934 19459.151 - 19559.975: 80.1136% ( 92) 00:08:02.934 19559.975 - 19660.800: 81.5341% ( 100) 00:08:02.934 19660.800 - 19761.625: 82.9688% ( 101) 00:08:02.934 19761.625 - 19862.449: 84.3466% ( 97) 00:08:02.934 19862.449 - 19963.274: 85.7812% ( 101) 00:08:02.934 19963.274 - 20064.098: 86.9744% ( 84) 00:08:02.934 20064.098 - 20164.923: 88.1250% ( 81) 00:08:02.934 20164.923 - 20265.748: 89.2188% ( 77) 00:08:02.934 20265.748 - 20366.572: 90.2131% ( 70) 00:08:02.934 20366.572 - 20467.397: 91.0795% ( 61) 00:08:02.934 20467.397 - 20568.222: 91.7756% ( 49) 00:08:02.934 20568.222 - 20669.046: 92.3722% ( 42) 00:08:02.934 20669.046 - 20769.871: 92.8835% ( 36) 00:08:02.934 20769.871 - 20870.695: 93.4091% ( 37) 00:08:02.934 20870.695 - 20971.520: 93.7784% ( 26) 00:08:02.934 20971.520 - 21072.345: 94.1619% ( 27) 00:08:02.934 21072.345 - 21173.169: 94.5739% ( 29) 00:08:02.934 21173.169 - 21273.994: 94.9858% ( 29) 00:08:02.934 21273.994 - 21374.818: 95.3409% ( 25) 00:08:02.934 21374.818 - 21475.643: 95.5540% ( 15) 00:08:02.934 21475.643 - 21576.468: 95.7102% ( 11) 00:08:02.934 21576.468 - 21677.292: 95.8949% ( 13) 00:08:02.934 21677.292 - 21778.117: 96.0511% ( 11) 00:08:02.934 21778.117 - 21878.942: 96.2074% ( 11) 00:08:02.934 21878.942 - 21979.766: 96.3636% ( 11) 00:08:02.934 21979.766 - 22080.591: 96.5341% ( 12) 00:08:02.934 22080.591 - 22181.415: 96.6761% ( 10) 00:08:02.934 22181.415 - 22282.240: 96.8608% ( 13) 00:08:02.934 22282.240 - 22383.065: 97.0312% ( 12) 00:08:02.934 22383.065 - 22483.889: 97.1165% ( 6) 00:08:02.934 22483.889 - 22584.714: 97.2159% ( 7) 00:08:02.934 22584.714 - 22685.538: 97.3295% ( 8) 00:08:02.934 22685.538 - 22786.363: 97.4148% ( 6) 00:08:02.934 22786.363 - 22887.188: 97.4574% ( 3) 00:08:02.934 22887.188 - 22988.012: 97.5000% ( 3) 00:08:02.934 22988.012 - 23088.837: 97.5426% ( 3) 00:08:02.934 23088.837 - 23189.662: 97.5852% ( 3) 00:08:02.934 23189.662 - 23290.486: 97.6420% ( 4) 00:08:02.934 23290.486 - 23391.311: 97.6989% ( 4) 00:08:02.934 23391.311 - 23492.135: 97.7557% ( 4) 00:08:02.934 23492.135 - 23592.960: 97.7983% ( 3) 00:08:02.934 23592.960 - 23693.785: 97.8551% ( 4) 00:08:02.934 23693.785 - 23794.609: 97.9119% ( 4) 00:08:02.934 23794.609 - 23895.434: 97.9545% ( 3) 00:08:02.934 23895.434 - 23996.258: 98.0114% ( 4) 00:08:02.934 23996.258 - 24097.083: 98.0540% ( 3) 00:08:02.934 24097.083 - 24197.908: 98.1108% ( 4) 00:08:02.934 24197.908 - 24298.732: 98.1676% ( 4) 00:08:02.934 24298.732 - 24399.557: 98.1818% ( 1) 
00:08:02.934 31658.929 - 31860.578: 98.2386% ( 4) 00:08:02.934 31860.578 - 32062.228: 98.3381% ( 7) 00:08:02.934 32062.228 - 32263.877: 98.4375% ( 7) 00:08:02.934 32263.877 - 32465.526: 98.5511% ( 8) 00:08:02.934 32465.526 - 32667.175: 98.6506% ( 7) 00:08:02.934 32667.175 - 32868.825: 98.7358% ( 6) 00:08:02.934 32868.825 - 33070.474: 98.8352% ( 7) 00:08:02.935 33070.474 - 33272.123: 98.9347% ( 7) 00:08:02.935 33272.123 - 33473.772: 99.0483% ( 8) 00:08:02.935 33473.772 - 33675.422: 99.0909% ( 3) 00:08:02.935 42346.338 - 42547.988: 99.1619% ( 5) 00:08:02.935 42547.988 - 42749.637: 99.2614% ( 7) 00:08:02.935 42749.637 - 42951.286: 99.3324% ( 5) 00:08:02.935 42951.286 - 43152.935: 99.4176% ( 6) 00:08:02.935 43152.935 - 43354.585: 99.5312% ( 8) 00:08:02.935 43354.585 - 43556.234: 99.6307% ( 7) 00:08:02.935 43556.234 - 43757.883: 99.7443% ( 8) 00:08:02.935 43757.883 - 43959.532: 99.8438% ( 7) 00:08:02.935 43959.532 - 44161.182: 99.9432% ( 7) 00:08:02.935 44161.182 - 44362.831: 100.0000% ( 4) 00:08:02.935 00:08:02.935 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:02.935 ============================================================================== 00:08:02.935 Range in us Cumulative IO count 00:08:02.935 11645.243 - 11695.655: 0.0142% ( 1) 00:08:02.935 11695.655 - 11746.068: 0.0710% ( 4) 00:08:02.935 11746.068 - 11796.480: 0.1136% ( 3) 00:08:02.935 11796.480 - 11846.892: 0.1562% ( 3) 00:08:02.935 11846.892 - 11897.305: 0.1847% ( 2) 00:08:02.935 11897.305 - 11947.717: 0.2131% ( 2) 00:08:02.935 11947.717 - 11998.129: 0.2273% ( 1) 00:08:02.935 11998.129 - 12048.542: 0.2557% ( 2) 00:08:02.935 12048.542 - 12098.954: 0.2841% ( 2) 00:08:02.935 12098.954 - 12149.366: 0.3267% ( 3) 00:08:02.935 12149.366 - 12199.778: 0.3551% ( 2) 00:08:02.935 12199.778 - 12250.191: 0.3977% ( 3) 00:08:02.935 12250.191 - 12300.603: 0.4261% ( 2) 00:08:02.935 12300.603 - 12351.015: 0.4545% ( 2) 00:08:02.935 12351.015 - 12401.428: 0.4972% ( 3) 00:08:02.935 12401.428 - 12451.840: 0.5256% ( 2) 00:08:02.935 12451.840 - 12502.252: 0.5682% ( 3) 00:08:02.935 12502.252 - 12552.665: 0.5966% ( 2) 00:08:02.935 12552.665 - 12603.077: 0.6392% ( 3) 00:08:02.935 12603.077 - 12653.489: 0.6676% ( 2) 00:08:02.935 12653.489 - 12703.902: 0.7102% ( 3) 00:08:02.935 12703.902 - 12754.314: 0.7386% ( 2) 00:08:02.935 12754.314 - 12804.726: 0.7812% ( 3) 00:08:02.935 12804.726 - 12855.138: 0.8097% ( 2) 00:08:02.935 12855.138 - 12905.551: 0.8381% ( 2) 00:08:02.935 12905.551 - 13006.375: 0.9091% ( 5) 00:08:02.935 14518.745 - 14619.569: 0.9375% ( 2) 00:08:02.935 14619.569 - 14720.394: 0.9943% ( 4) 00:08:02.935 14720.394 - 14821.218: 1.0511% ( 4) 00:08:02.935 14821.218 - 14922.043: 1.1080% ( 4) 00:08:02.935 14922.043 - 15022.868: 1.3068% ( 14) 00:08:02.935 15022.868 - 15123.692: 1.4489% ( 10) 00:08:02.935 15123.692 - 15224.517: 1.7614% ( 22) 00:08:02.935 15224.517 - 15325.342: 2.1875% ( 30) 00:08:02.935 15325.342 - 15426.166: 2.9261% ( 52) 00:08:02.935 15426.166 - 15526.991: 3.8068% ( 62) 00:08:02.935 15526.991 - 15627.815: 4.8438% ( 73) 00:08:02.935 15627.815 - 15728.640: 6.0653% ( 86) 00:08:02.935 15728.640 - 15829.465: 7.5426% ( 104) 00:08:02.935 15829.465 - 15930.289: 9.1335% ( 112) 00:08:02.935 15930.289 - 16031.114: 10.8239% ( 119) 00:08:02.935 16031.114 - 16131.938: 12.6562% ( 129) 00:08:02.935 16131.938 - 16232.763: 14.7727% ( 149) 00:08:02.935 16232.763 - 16333.588: 17.1307% ( 166) 00:08:02.935 16333.588 - 16434.412: 19.5170% ( 168) 00:08:02.935 16434.412 - 16535.237: 22.0455% ( 178) 00:08:02.935 16535.237 - 16636.062: 24.2472% 
( 155) 00:08:02.935 16636.062 - 16736.886: 26.7188% ( 174) 00:08:02.935 16736.886 - 16837.711: 29.5028% ( 196) 00:08:02.935 16837.711 - 16938.535: 32.2443% ( 193) 00:08:02.935 16938.535 - 17039.360: 34.6875% ( 172) 00:08:02.935 17039.360 - 17140.185: 37.2159% ( 178) 00:08:02.935 17140.185 - 17241.009: 39.7869% ( 181) 00:08:02.935 17241.009 - 17341.834: 42.1023% ( 163) 00:08:02.935 17341.834 - 17442.658: 44.2472% ( 151) 00:08:02.935 17442.658 - 17543.483: 46.3778% ( 150) 00:08:02.935 17543.483 - 17644.308: 48.3665% ( 140) 00:08:02.935 17644.308 - 17745.132: 50.6534% ( 161) 00:08:02.935 17745.132 - 17845.957: 52.9972% ( 165) 00:08:02.935 17845.957 - 17946.782: 55.2273% ( 157) 00:08:02.935 17946.782 - 18047.606: 57.5426% ( 163) 00:08:02.935 18047.606 - 18148.431: 59.8153% ( 160) 00:08:02.935 18148.431 - 18249.255: 61.7188% ( 134) 00:08:02.935 18249.255 - 18350.080: 63.6932% ( 139) 00:08:02.935 18350.080 - 18450.905: 65.5966% ( 134) 00:08:02.935 18450.905 - 18551.729: 67.3011% ( 120) 00:08:02.935 18551.729 - 18652.554: 68.8210% ( 107) 00:08:02.935 18652.554 - 18753.378: 70.3977% ( 111) 00:08:02.935 18753.378 - 18854.203: 71.9034% ( 106) 00:08:02.935 18854.203 - 18955.028: 73.2812% ( 97) 00:08:02.935 18955.028 - 19055.852: 74.6449% ( 96) 00:08:02.935 19055.852 - 19156.677: 75.8807% ( 87) 00:08:02.935 19156.677 - 19257.502: 77.1023% ( 86) 00:08:02.935 19257.502 - 19358.326: 78.1250% ( 72) 00:08:02.935 19358.326 - 19459.151: 79.1761% ( 74) 00:08:02.935 19459.151 - 19559.975: 80.0426% ( 61) 00:08:02.935 19559.975 - 19660.800: 81.0511% ( 71) 00:08:02.935 19660.800 - 19761.625: 81.9886% ( 66) 00:08:02.935 19761.625 - 19862.449: 82.8977% ( 64) 00:08:02.935 19862.449 - 19963.274: 83.7500% ( 60) 00:08:02.935 19963.274 - 20064.098: 84.7301% ( 69) 00:08:02.935 20064.098 - 20164.923: 85.5682% ( 59) 00:08:02.935 20164.923 - 20265.748: 86.4631% ( 63) 00:08:02.935 20265.748 - 20366.572: 87.4432% ( 69) 00:08:02.935 20366.572 - 20467.397: 88.3239% ( 62) 00:08:02.935 20467.397 - 20568.222: 89.1477% ( 58) 00:08:02.935 20568.222 - 20669.046: 89.9574% ( 57) 00:08:02.935 20669.046 - 20769.871: 90.7812% ( 58) 00:08:02.935 20769.871 - 20870.695: 91.5625% ( 55) 00:08:02.935 20870.695 - 20971.520: 92.3438% ( 55) 00:08:02.935 20971.520 - 21072.345: 93.2244% ( 62) 00:08:02.935 21072.345 - 21173.169: 93.9631% ( 52) 00:08:02.935 21173.169 - 21273.994: 94.6733% ( 50) 00:08:02.935 21273.994 - 21374.818: 95.2699% ( 42) 00:08:02.935 21374.818 - 21475.643: 95.7670% ( 35) 00:08:02.935 21475.643 - 21576.468: 96.1364% ( 26) 00:08:02.935 21576.468 - 21677.292: 96.4773% ( 24) 00:08:02.935 21677.292 - 21778.117: 96.7472% ( 19) 00:08:02.935 21778.117 - 21878.942: 96.9744% ( 16) 00:08:02.935 21878.942 - 21979.766: 97.2301% ( 18) 00:08:02.935 21979.766 - 22080.591: 97.4006% ( 12) 00:08:02.935 22080.591 - 22181.415: 97.5852% ( 13) 00:08:02.935 22181.415 - 22282.240: 97.7273% ( 10) 00:08:02.935 22282.240 - 22383.065: 97.8267% ( 7) 00:08:02.935 22383.065 - 22483.889: 97.8835% ( 4) 00:08:02.935 22483.889 - 22584.714: 97.9261% ( 3) 00:08:02.935 22584.714 - 22685.538: 97.9688% ( 3) 00:08:02.935 22685.538 - 22786.363: 98.0256% ( 4) 00:08:02.935 22786.363 - 22887.188: 98.0540% ( 2) 00:08:02.935 22887.188 - 22988.012: 98.0966% ( 3) 00:08:02.935 22988.012 - 23088.837: 98.1392% ( 3) 00:08:02.935 23088.837 - 23189.662: 98.1818% ( 3) 00:08:02.935 31053.982 - 31255.631: 98.2386% ( 4) 00:08:02.935 31255.631 - 31457.280: 98.3523% ( 8) 00:08:02.935 31457.280 - 31658.929: 98.4517% ( 7) 00:08:02.935 31658.929 - 31860.578: 98.5511% ( 7) 00:08:02.935 
31860.578 - 32062.228: 98.6364% ( 6) 00:08:02.935 32062.228 - 32263.877: 98.7358% ( 7) 00:08:02.935 32263.877 - 32465.526: 98.8352% ( 7) 00:08:02.935 32465.526 - 32667.175: 98.9062% ( 5) 00:08:02.935 32667.175 - 32868.825: 98.9915% ( 6) 00:08:02.935 32868.825 - 33070.474: 99.0909% ( 7) 00:08:02.935 42346.338 - 42547.988: 99.1051% ( 1) 00:08:02.935 42547.988 - 42749.637: 99.1903% ( 6) 00:08:02.935 42749.637 - 42951.286: 99.2756% ( 6) 00:08:02.935 42951.286 - 43152.935: 99.3608% ( 6) 00:08:02.935 43152.935 - 43354.585: 99.4602% ( 7) 00:08:02.935 43354.585 - 43556.234: 99.5597% ( 7) 00:08:02.935 43556.234 - 43757.883: 99.6449% ( 6) 00:08:02.935 43757.883 - 43959.532: 99.7443% ( 7) 00:08:02.935 43959.532 - 44161.182: 99.8438% ( 7) 00:08:02.935 44161.182 - 44362.831: 99.9432% ( 7) 00:08:02.935 44362.831 - 44564.480: 100.0000% ( 4) 00:08:02.935 00:08:02.935 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:02.935 ============================================================================== 00:08:02.935 Range in us Cumulative IO count 00:08:02.935 11241.945 - 11292.357: 0.0142% ( 1) 00:08:02.935 11292.357 - 11342.769: 0.0426% ( 2) 00:08:02.935 11342.769 - 11393.182: 0.0994% ( 4) 00:08:02.935 11393.182 - 11443.594: 0.1562% ( 4) 00:08:02.935 11443.594 - 11494.006: 0.1847% ( 2) 00:08:02.935 11494.006 - 11544.418: 0.2131% ( 2) 00:08:02.935 11544.418 - 11594.831: 0.2273% ( 1) 00:08:02.935 11594.831 - 11645.243: 0.2557% ( 2) 00:08:02.935 11645.243 - 11695.655: 0.2841% ( 2) 00:08:02.935 11695.655 - 11746.068: 0.3125% ( 2) 00:08:02.935 11746.068 - 11796.480: 0.3551% ( 3) 00:08:02.935 11796.480 - 11846.892: 0.3977% ( 3) 00:08:02.935 11846.892 - 11897.305: 0.4261% ( 2) 00:08:02.935 11897.305 - 11947.717: 0.4545% ( 2) 00:08:02.935 11947.717 - 11998.129: 0.4830% ( 2) 00:08:02.935 11998.129 - 12048.542: 0.5114% ( 2) 00:08:02.935 12048.542 - 12098.954: 0.5398% ( 2) 00:08:02.936 12098.954 - 12149.366: 0.5824% ( 3) 00:08:02.936 12149.366 - 12199.778: 0.5966% ( 1) 00:08:02.936 12199.778 - 12250.191: 0.6392% ( 3) 00:08:02.936 12250.191 - 12300.603: 0.6676% ( 2) 00:08:02.936 12300.603 - 12351.015: 0.7102% ( 3) 00:08:02.936 12351.015 - 12401.428: 0.7386% ( 2) 00:08:02.936 12401.428 - 12451.840: 0.7812% ( 3) 00:08:02.936 12451.840 - 12502.252: 0.8097% ( 2) 00:08:02.936 12502.252 - 12552.665: 0.8239% ( 1) 00:08:02.936 12552.665 - 12603.077: 0.8665% ( 3) 00:08:02.936 12603.077 - 12653.489: 0.8949% ( 2) 00:08:02.936 12653.489 - 12703.902: 0.9091% ( 1) 00:08:02.936 14619.569 - 14720.394: 0.9375% ( 2) 00:08:02.936 14720.394 - 14821.218: 1.0085% ( 5) 00:08:02.936 14821.218 - 14922.043: 1.2074% ( 14) 00:08:02.936 14922.043 - 15022.868: 1.4205% ( 15) 00:08:02.936 15022.868 - 15123.692: 1.6761% ( 18) 00:08:02.936 15123.692 - 15224.517: 1.9602% ( 20) 00:08:02.936 15224.517 - 15325.342: 2.4006% ( 31) 00:08:02.936 15325.342 - 15426.166: 3.0540% ( 46) 00:08:02.936 15426.166 - 15526.991: 3.8920% ( 59) 00:08:02.936 15526.991 - 15627.815: 4.9432% ( 74) 00:08:02.936 15627.815 - 15728.640: 6.2074% ( 89) 00:08:02.936 15728.640 - 15829.465: 7.8835% ( 118) 00:08:02.936 15829.465 - 15930.289: 9.6307% ( 123) 00:08:02.936 15930.289 - 16031.114: 11.7188% ( 147) 00:08:02.936 16031.114 - 16131.938: 13.6080% ( 133) 00:08:02.936 16131.938 - 16232.763: 15.8523% ( 158) 00:08:02.936 16232.763 - 16333.588: 18.3807% ( 178) 00:08:02.936 16333.588 - 16434.412: 20.8807% ( 176) 00:08:02.936 16434.412 - 16535.237: 23.7074% ( 199) 00:08:02.936 16535.237 - 16636.062: 26.3636% ( 187) 00:08:02.936 16636.062 - 16736.886: 29.0909% ( 192) 
00:08:02.936 16736.886 - 16837.711: 31.6761% ( 182) 00:08:02.936 16837.711 - 16938.535: 33.9489% ( 160) 00:08:02.936 16938.535 - 17039.360: 35.8239% ( 132) 00:08:02.936 17039.360 - 17140.185: 37.7415% ( 135) 00:08:02.936 17140.185 - 17241.009: 39.8011% ( 145) 00:08:02.936 17241.009 - 17341.834: 42.0455% ( 158) 00:08:02.936 17341.834 - 17442.658: 44.2614% ( 156) 00:08:02.936 17442.658 - 17543.483: 46.4347% ( 153) 00:08:02.936 17543.483 - 17644.308: 48.6932% ( 159) 00:08:02.936 17644.308 - 17745.132: 50.8807% ( 154) 00:08:02.936 17745.132 - 17845.957: 52.8551% ( 139) 00:08:02.936 17845.957 - 17946.782: 54.7159% ( 131) 00:08:02.936 17946.782 - 18047.606: 56.5909% ( 132) 00:08:02.936 18047.606 - 18148.431: 58.6080% ( 142) 00:08:02.936 18148.431 - 18249.255: 60.6818% ( 146) 00:08:02.936 18249.255 - 18350.080: 62.7273% ( 144) 00:08:02.936 18350.080 - 18450.905: 64.8153% ( 147) 00:08:02.936 18450.905 - 18551.729: 67.0312% ( 156) 00:08:02.936 18551.729 - 18652.554: 68.9347% ( 134) 00:08:02.936 18652.554 - 18753.378: 70.6250% ( 119) 00:08:02.936 18753.378 - 18854.203: 72.2443% ( 114) 00:08:02.936 18854.203 - 18955.028: 73.9631% ( 121) 00:08:02.936 18955.028 - 19055.852: 75.4545% ( 105) 00:08:02.936 19055.852 - 19156.677: 76.7756% ( 93) 00:08:02.936 19156.677 - 19257.502: 77.8551% ( 76) 00:08:02.936 19257.502 - 19358.326: 78.7926% ( 66) 00:08:02.936 19358.326 - 19459.151: 79.8580% ( 75) 00:08:02.936 19459.151 - 19559.975: 80.9517% ( 77) 00:08:02.936 19559.975 - 19660.800: 81.9176% ( 68) 00:08:02.936 19660.800 - 19761.625: 82.7273% ( 57) 00:08:02.936 19761.625 - 19862.449: 83.4091% ( 48) 00:08:02.936 19862.449 - 19963.274: 84.1335% ( 51) 00:08:02.936 19963.274 - 20064.098: 84.9148% ( 55) 00:08:02.936 20064.098 - 20164.923: 85.5682% ( 46) 00:08:02.936 20164.923 - 20265.748: 86.3494% ( 55) 00:08:02.936 20265.748 - 20366.572: 87.1307% ( 55) 00:08:02.936 20366.572 - 20467.397: 87.8409% ( 50) 00:08:02.936 20467.397 - 20568.222: 88.5511% ( 50) 00:08:02.936 20568.222 - 20669.046: 89.2756% ( 51) 00:08:02.936 20669.046 - 20769.871: 89.9574% ( 48) 00:08:02.936 20769.871 - 20870.695: 90.6818% ( 51) 00:08:02.936 20870.695 - 20971.520: 91.3778% ( 49) 00:08:02.936 20971.520 - 21072.345: 92.1449% ( 54) 00:08:02.936 21072.345 - 21173.169: 92.8977% ( 53) 00:08:02.936 21173.169 - 21273.994: 93.5795% ( 48) 00:08:02.936 21273.994 - 21374.818: 94.1477% ( 40) 00:08:02.936 21374.818 - 21475.643: 94.8011% ( 46) 00:08:02.936 21475.643 - 21576.468: 95.2131% ( 29) 00:08:02.936 21576.468 - 21677.292: 95.6250% ( 29) 00:08:02.936 21677.292 - 21778.117: 96.0511% ( 30) 00:08:02.936 21778.117 - 21878.942: 96.4062% ( 25) 00:08:02.936 21878.942 - 21979.766: 96.7472% ( 24) 00:08:02.936 21979.766 - 22080.591: 97.0312% ( 20) 00:08:02.936 22080.591 - 22181.415: 97.3011% ( 19) 00:08:02.936 22181.415 - 22282.240: 97.5426% ( 17) 00:08:02.936 22282.240 - 22383.065: 97.7699% ( 16) 00:08:02.936 22383.065 - 22483.889: 97.9403% ( 12) 00:08:02.936 22483.889 - 22584.714: 97.9972% ( 4) 00:08:02.936 22584.714 - 22685.538: 98.0398% ( 3) 00:08:02.936 22685.538 - 22786.363: 98.0682% ( 2) 00:08:02.936 22786.363 - 22887.188: 98.1108% ( 3) 00:08:02.936 22887.188 - 22988.012: 98.1534% ( 3) 00:08:02.936 22988.012 - 23088.837: 98.1818% ( 2) 00:08:02.936 29440.788 - 29642.437: 98.2386% ( 4) 00:08:02.936 29642.437 - 29844.086: 98.3239% ( 6) 00:08:02.936 29844.086 - 30045.735: 98.4233% ( 7) 00:08:02.936 30045.735 - 30247.385: 98.5227% ( 7) 00:08:02.936 30247.385 - 30449.034: 98.6080% ( 6) 00:08:02.936 30449.034 - 30650.683: 98.7074% ( 7) 00:08:02.936 30650.683 - 
30852.332: 98.8068% ( 7) 00:08:02.936 30852.332 - 31053.982: 98.9062% ( 7) 00:08:02.936 31053.982 - 31255.631: 99.0057% ( 7) 00:08:02.936 31255.631 - 31457.280: 99.0909% ( 6) 00:08:02.936 41539.742 - 41741.391: 99.1335% ( 3) 00:08:02.936 41741.391 - 41943.040: 99.2045% ( 5) 00:08:02.936 41943.040 - 42144.689: 99.2756% ( 5) 00:08:02.936 42144.689 - 42346.338: 99.3608% ( 6) 00:08:02.936 42346.338 - 42547.988: 99.4602% ( 7) 00:08:02.936 42547.988 - 42749.637: 99.5597% ( 7) 00:08:02.936 42749.637 - 42951.286: 99.6449% ( 6) 00:08:02.936 42951.286 - 43152.935: 99.7443% ( 7) 00:08:02.936 43152.935 - 43354.585: 99.8438% ( 7) 00:08:02.936 43354.585 - 43556.234: 99.9432% ( 7) 00:08:02.936 43556.234 - 43757.883: 100.0000% ( 4) 00:08:02.936 00:08:02.936 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:02.936 ============================================================================== 00:08:02.936 Range in us Cumulative IO count 00:08:02.936 10586.585 - 10636.997: 0.0568% ( 4) 00:08:02.936 10636.997 - 10687.409: 0.0852% ( 2) 00:08:02.936 10687.409 - 10737.822: 0.1420% ( 4) 00:08:02.936 10737.822 - 10788.234: 0.1847% ( 3) 00:08:02.936 10788.234 - 10838.646: 0.2557% ( 5) 00:08:02.936 10838.646 - 10889.058: 0.2699% ( 1) 00:08:02.936 10889.058 - 10939.471: 0.2983% ( 2) 00:08:02.936 10939.471 - 10989.883: 0.3267% ( 2) 00:08:02.936 10989.883 - 11040.295: 0.3551% ( 2) 00:08:02.936 11040.295 - 11090.708: 0.3835% ( 2) 00:08:02.936 11090.708 - 11141.120: 0.4119% ( 2) 00:08:02.936 11141.120 - 11191.532: 0.4545% ( 3) 00:08:02.936 11191.532 - 11241.945: 0.4830% ( 2) 00:08:02.936 11241.945 - 11292.357: 0.5256% ( 3) 00:08:02.936 11292.357 - 11342.769: 0.5540% ( 2) 00:08:02.936 11342.769 - 11393.182: 0.5824% ( 2) 00:08:02.936 11393.182 - 11443.594: 0.6250% ( 3) 00:08:02.936 11443.594 - 11494.006: 0.6534% ( 2) 00:08:02.936 11494.006 - 11544.418: 0.6818% ( 2) 00:08:02.936 11544.418 - 11594.831: 0.7244% ( 3) 00:08:02.936 11594.831 - 11645.243: 0.7528% ( 2) 00:08:02.936 11645.243 - 11695.655: 0.7955% ( 3) 00:08:02.936 11695.655 - 11746.068: 0.8097% ( 1) 00:08:02.936 11746.068 - 11796.480: 0.8523% ( 3) 00:08:02.936 11796.480 - 11846.892: 0.8807% ( 2) 00:08:02.936 11846.892 - 11897.305: 0.9091% ( 2) 00:08:02.936 14014.622 - 14115.446: 0.9659% ( 4) 00:08:02.936 14115.446 - 14216.271: 1.0227% ( 4) 00:08:02.936 14216.271 - 14317.095: 1.0653% ( 3) 00:08:02.936 14317.095 - 14417.920: 1.1222% ( 4) 00:08:02.936 14417.920 - 14518.745: 1.1932% ( 5) 00:08:02.936 14518.745 - 14619.569: 1.3352% ( 10) 00:08:02.936 14619.569 - 14720.394: 1.4773% ( 10) 00:08:02.936 14720.394 - 14821.218: 1.6193% ( 10) 00:08:02.936 14821.218 - 14922.043: 1.8892% ( 19) 00:08:02.936 14922.043 - 15022.868: 2.2585% ( 26) 00:08:02.936 15022.868 - 15123.692: 2.6705% ( 29) 00:08:02.936 15123.692 - 15224.517: 3.1392% ( 33) 00:08:02.936 15224.517 - 15325.342: 3.6932% ( 39) 00:08:02.936 15325.342 - 15426.166: 4.2898% ( 42) 00:08:02.936 15426.166 - 15526.991: 4.8438% ( 39) 00:08:02.936 15526.991 - 15627.815: 5.6534% ( 57) 00:08:02.936 15627.815 - 15728.640: 6.6193% ( 68) 00:08:02.936 15728.640 - 15829.465: 7.6562% ( 73) 00:08:02.936 15829.465 - 15930.289: 8.9773% ( 93) 00:08:02.936 15930.289 - 16031.114: 10.4830% ( 106) 00:08:02.936 16031.114 - 16131.938: 12.2869% ( 127) 00:08:02.936 16131.938 - 16232.763: 14.2614% ( 139) 00:08:02.936 16232.763 - 16333.588: 16.4347% ( 153) 00:08:02.936 16333.588 - 16434.412: 18.8494% ( 170) 00:08:02.936 16434.412 - 16535.237: 21.3778% ( 178) 00:08:02.936 16535.237 - 16636.062: 23.8636% ( 175) 00:08:02.937 
16636.062 - 16736.886: 26.3778% ( 177) 00:08:02.937 16736.886 - 16837.711: 28.5795% ( 155) 00:08:02.937 16837.711 - 16938.535: 30.7528% ( 153) 00:08:02.937 16938.535 - 17039.360: 33.1676% ( 170) 00:08:02.937 17039.360 - 17140.185: 35.7386% ( 181) 00:08:02.937 17140.185 - 17241.009: 37.9830% ( 158) 00:08:02.937 17241.009 - 17341.834: 40.5824% ( 183) 00:08:02.937 17341.834 - 17442.658: 43.2386% ( 187) 00:08:02.937 17442.658 - 17543.483: 45.6392% ( 169) 00:08:02.937 17543.483 - 17644.308: 47.9688% ( 164) 00:08:02.937 17644.308 - 17745.132: 50.5824% ( 184) 00:08:02.937 17745.132 - 17845.957: 53.3523% ( 195) 00:08:02.937 17845.957 - 17946.782: 56.0938% ( 193) 00:08:02.937 17946.782 - 18047.606: 58.8494% ( 194) 00:08:02.937 18047.606 - 18148.431: 61.3068% ( 173) 00:08:02.937 18148.431 - 18249.255: 63.4375% ( 150) 00:08:02.937 18249.255 - 18350.080: 65.5114% ( 146) 00:08:02.937 18350.080 - 18450.905: 67.5142% ( 141) 00:08:02.937 18450.905 - 18551.729: 69.4886% ( 139) 00:08:02.937 18551.729 - 18652.554: 71.3068% ( 128) 00:08:02.937 18652.554 - 18753.378: 73.0540% ( 123) 00:08:02.937 18753.378 - 18854.203: 74.6449% ( 112) 00:08:02.937 18854.203 - 18955.028: 75.8523% ( 85) 00:08:02.937 18955.028 - 19055.852: 76.9460% ( 77) 00:08:02.937 19055.852 - 19156.677: 77.9403% ( 70) 00:08:02.937 19156.677 - 19257.502: 78.7500% ( 57) 00:08:02.937 19257.502 - 19358.326: 79.5881% ( 59) 00:08:02.937 19358.326 - 19459.151: 80.4261% ( 59) 00:08:02.937 19459.151 - 19559.975: 81.2500% ( 58) 00:08:02.937 19559.975 - 19660.800: 82.0739% ( 58) 00:08:02.937 19660.800 - 19761.625: 82.8125% ( 52) 00:08:02.937 19761.625 - 19862.449: 83.5227% ( 50) 00:08:02.937 19862.449 - 19963.274: 84.1903% ( 47) 00:08:02.937 19963.274 - 20064.098: 84.9006% ( 50) 00:08:02.937 20064.098 - 20164.923: 85.5824% ( 48) 00:08:02.937 20164.923 - 20265.748: 86.1648% ( 41) 00:08:02.937 20265.748 - 20366.572: 86.6903% ( 37) 00:08:02.937 20366.572 - 20467.397: 87.5000% ( 57) 00:08:02.937 20467.397 - 20568.222: 88.0966% ( 42) 00:08:02.937 20568.222 - 20669.046: 88.7784% ( 48) 00:08:02.937 20669.046 - 20769.871: 89.4602% ( 48) 00:08:02.937 20769.871 - 20870.695: 90.0994% ( 45) 00:08:02.937 20870.695 - 20971.520: 90.7670% ( 47) 00:08:02.937 20971.520 - 21072.345: 91.4062% ( 45) 00:08:02.937 21072.345 - 21173.169: 92.1591% ( 53) 00:08:02.937 21173.169 - 21273.994: 92.8977% ( 52) 00:08:02.937 21273.994 - 21374.818: 93.6506% ( 53) 00:08:02.937 21374.818 - 21475.643: 94.2756% ( 44) 00:08:02.937 21475.643 - 21576.468: 94.8864% ( 43) 00:08:02.937 21576.468 - 21677.292: 95.4119% ( 37) 00:08:02.937 21677.292 - 21778.117: 95.9233% ( 36) 00:08:02.937 21778.117 - 21878.942: 96.3494% ( 30) 00:08:02.937 21878.942 - 21979.766: 96.6903% ( 24) 00:08:02.937 21979.766 - 22080.591: 97.0028% ( 22) 00:08:02.937 22080.591 - 22181.415: 97.2727% ( 19) 00:08:02.937 22181.415 - 22282.240: 97.5142% ( 17) 00:08:02.937 22282.240 - 22383.065: 97.7415% ( 16) 00:08:02.937 22383.065 - 22483.889: 97.9261% ( 13) 00:08:02.937 22483.889 - 22584.714: 98.0398% ( 8) 00:08:02.937 22584.714 - 22685.538: 98.0966% ( 4) 00:08:02.937 22685.538 - 22786.363: 98.1392% ( 3) 00:08:02.937 22786.363 - 22887.188: 98.1676% ( 2) 00:08:02.937 22887.188 - 22988.012: 98.1818% ( 1) 00:08:02.937 28029.243 - 28230.892: 98.2244% ( 3) 00:08:02.937 28230.892 - 28432.542: 98.3097% ( 6) 00:08:02.937 28432.542 - 28634.191: 98.4091% ( 7) 00:08:02.937 28634.191 - 28835.840: 98.5085% ( 7) 00:08:02.937 28835.840 - 29037.489: 98.6080% ( 7) 00:08:02.937 29037.489 - 29239.138: 98.7074% ( 7) 00:08:02.937 29239.138 - 29440.788: 
98.8068% ( 7) 00:08:02.937 29440.788 - 29642.437: 98.9062% ( 7) 00:08:02.937 29642.437 - 29844.086: 99.0057% ( 7) 00:08:02.937 29844.086 - 30045.735: 99.0909% ( 6) 00:08:02.937 40329.846 - 40531.495: 99.1193% ( 2) 00:08:02.937 40531.495 - 40733.145: 99.2045% ( 6) 00:08:02.937 40733.145 - 40934.794: 99.2898% ( 6) 00:08:02.937 40934.794 - 41136.443: 99.3750% ( 6) 00:08:02.937 41136.443 - 41338.092: 99.4602% ( 6) 00:08:02.937 41338.092 - 41539.742: 99.5597% ( 7) 00:08:02.937 41539.742 - 41741.391: 99.6591% ( 7) 00:08:02.937 41741.391 - 41943.040: 99.7585% ( 7) 00:08:02.937 41943.040 - 42144.689: 99.8580% ( 7) 00:08:02.937 42144.689 - 42346.338: 99.9574% ( 7) 00:08:02.937 42346.338 - 42547.988: 100.0000% ( 3) 00:08:02.937 00:08:02.937 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:02.937 ============================================================================== 00:08:02.937 Range in us Cumulative IO count 00:08:02.937 10233.698 - 10284.111: 0.0282% ( 2) 00:08:02.937 10284.111 - 10334.523: 0.0563% ( 2) 00:08:02.937 10334.523 - 10384.935: 0.0845% ( 2) 00:08:02.937 10384.935 - 10435.348: 0.1267% ( 3) 00:08:02.937 10435.348 - 10485.760: 0.1548% ( 2) 00:08:02.937 10485.760 - 10536.172: 0.1830% ( 2) 00:08:02.937 10536.172 - 10586.585: 0.2252% ( 3) 00:08:02.937 10586.585 - 10636.997: 0.2534% ( 2) 00:08:02.937 10636.997 - 10687.409: 0.2815% ( 2) 00:08:02.937 10687.409 - 10737.822: 0.3097% ( 2) 00:08:02.937 10737.822 - 10788.234: 0.3519% ( 3) 00:08:02.937 10788.234 - 10838.646: 0.4645% ( 8) 00:08:02.937 10838.646 - 10889.058: 0.5631% ( 7) 00:08:02.937 10889.058 - 10939.471: 0.6053% ( 3) 00:08:02.937 10939.471 - 10989.883: 0.6475% ( 3) 00:08:02.937 10989.883 - 11040.295: 0.6757% ( 2) 00:08:02.937 11040.295 - 11090.708: 0.7038% ( 2) 00:08:02.937 11090.708 - 11141.120: 0.7320% ( 2) 00:08:02.937 11141.120 - 11191.532: 0.7601% ( 2) 00:08:02.937 11191.532 - 11241.945: 0.7883% ( 2) 00:08:02.937 11241.945 - 11292.357: 0.8305% ( 3) 00:08:02.937 11292.357 - 11342.769: 0.8587% ( 2) 00:08:02.937 11342.769 - 11393.182: 0.8868% ( 2) 00:08:02.937 11393.182 - 11443.594: 0.9009% ( 1) 00:08:02.937 13913.797 - 14014.622: 0.9431% ( 3) 00:08:02.937 14014.622 - 14115.446: 1.0135% ( 5) 00:08:02.937 14115.446 - 14216.271: 1.0839% ( 5) 00:08:02.937 14216.271 - 14317.095: 1.1543% ( 5) 00:08:02.937 14317.095 - 14417.920: 1.2247% ( 5) 00:08:02.937 14417.920 - 14518.745: 1.2950% ( 5) 00:08:02.937 14518.745 - 14619.569: 1.3654% ( 5) 00:08:02.937 14619.569 - 14720.394: 1.4358% ( 5) 00:08:02.937 14720.394 - 14821.218: 1.5907% ( 11) 00:08:02.937 14821.218 - 14922.043: 1.7173% ( 9) 00:08:02.937 14922.043 - 15022.868: 1.9003% ( 13) 00:08:02.937 15022.868 - 15123.692: 2.2241% ( 23) 00:08:02.937 15123.692 - 15224.517: 2.5197% ( 21) 00:08:02.937 15224.517 - 15325.342: 2.8998% ( 27) 00:08:02.937 15325.342 - 15426.166: 3.4628% ( 40) 00:08:02.937 15426.166 - 15526.991: 4.2370% ( 55) 00:08:02.937 15526.991 - 15627.815: 4.9268% ( 49) 00:08:02.937 15627.815 - 15728.640: 6.0389% ( 79) 00:08:02.937 15728.640 - 15829.465: 7.6014% ( 111) 00:08:02.937 15829.465 - 15930.289: 9.1639% ( 111) 00:08:02.937 15930.289 - 16031.114: 11.0642% ( 135) 00:08:02.937 16031.114 - 16131.938: 12.8660% ( 128) 00:08:02.937 16131.938 - 16232.763: 14.7241% ( 132) 00:08:02.937 16232.763 - 16333.588: 16.7511% ( 144) 00:08:02.937 16333.588 - 16434.412: 19.1019% ( 167) 00:08:02.937 16434.412 - 16535.237: 21.4386% ( 166) 00:08:02.937 16535.237 - 16636.062: 24.0146% ( 183) 00:08:02.937 16636.062 - 16736.886: 26.7173% ( 192) 00:08:02.937 16736.886 - 
16837.711: 29.2230% ( 178) 00:08:02.937 16837.711 - 16938.535: 31.7990% ( 183) 00:08:02.937 16938.535 - 17039.360: 34.4735% ( 190) 00:08:02.937 17039.360 - 17140.185: 36.9510% ( 176) 00:08:02.937 17140.185 - 17241.009: 39.5552% ( 185) 00:08:02.937 17241.009 - 17341.834: 42.3142% ( 196) 00:08:02.937 17341.834 - 17442.658: 44.9887% ( 190) 00:08:02.937 17442.658 - 17543.483: 47.6774% ( 191) 00:08:02.937 17543.483 - 17644.308: 50.1689% ( 177) 00:08:02.937 17644.308 - 17745.132: 52.4916% ( 165) 00:08:02.937 17745.132 - 17845.957: 54.5608% ( 147) 00:08:02.937 17845.957 - 17946.782: 56.6160% ( 146) 00:08:02.937 17946.782 - 18047.606: 59.0935% ( 176) 00:08:02.937 18047.606 - 18148.431: 61.4020% ( 164) 00:08:02.937 18148.431 - 18249.255: 63.3727% ( 140) 00:08:02.937 18249.255 - 18350.080: 65.2168% ( 131) 00:08:02.937 18350.080 - 18450.905: 66.8497% ( 116) 00:08:02.937 18450.905 - 18551.729: 68.3418% ( 106) 00:08:02.937 18551.729 - 18652.554: 69.8620% ( 108) 00:08:02.937 18652.554 - 18753.378: 71.1993% ( 95) 00:08:02.937 18753.378 - 18854.203: 72.5366% ( 95) 00:08:02.937 18854.203 - 18955.028: 73.8598% ( 94) 00:08:02.937 18955.028 - 19055.852: 75.3097% ( 103) 00:08:02.937 19055.852 - 19156.677: 76.4217% ( 79) 00:08:02.937 19156.677 - 19257.502: 77.4212% ( 71) 00:08:02.937 19257.502 - 19358.326: 78.4910% ( 76) 00:08:02.937 19358.326 - 19459.151: 79.6030% ( 79) 00:08:02.937 19459.151 - 19559.975: 80.6588% ( 75) 00:08:02.937 19559.975 - 19660.800: 81.7427% ( 77) 00:08:02.937 19660.800 - 19761.625: 82.6858% ( 67) 00:08:02.937 19761.625 - 19862.449: 83.6008% ( 65) 00:08:02.937 19862.449 - 19963.274: 84.4454% ( 60) 00:08:02.937 19963.274 - 20064.098: 85.2477% ( 57) 00:08:02.937 20064.098 - 20164.923: 86.0079% ( 54) 00:08:02.937 20164.923 - 20265.748: 86.6976% ( 49) 00:08:02.937 20265.748 - 20366.572: 87.4859% ( 56) 00:08:02.938 20366.572 - 20467.397: 88.2179% ( 52) 00:08:02.938 20467.397 - 20568.222: 89.0625% ( 60) 00:08:02.938 20568.222 - 20669.046: 89.7945% ( 52) 00:08:02.938 20669.046 - 20769.871: 90.6672% ( 62) 00:08:02.938 20769.871 - 20870.695: 91.5400% ( 62) 00:08:02.938 20870.695 - 20971.520: 92.2579% ( 51) 00:08:02.938 20971.520 - 21072.345: 92.9758% ( 51) 00:08:02.938 21072.345 - 21173.169: 93.5248% ( 39) 00:08:02.938 21173.169 - 21273.994: 94.0878% ( 40) 00:08:02.938 21273.994 - 21374.818: 94.6227% ( 38) 00:08:02.938 21374.818 - 21475.643: 95.2280% ( 43) 00:08:02.938 21475.643 - 21576.468: 95.7207% ( 35) 00:08:02.938 21576.468 - 21677.292: 96.2134% ( 35) 00:08:02.938 21677.292 - 21778.117: 96.6779% ( 33) 00:08:02.938 21778.117 - 21878.942: 97.0861% ( 29) 00:08:02.938 21878.942 - 21979.766: 97.4662% ( 27) 00:08:02.938 21979.766 - 22080.591: 97.8041% ( 24) 00:08:02.938 22080.591 - 22181.415: 98.1560% ( 25) 00:08:02.938 22181.415 - 22282.240: 98.4375% ( 20) 00:08:02.938 22282.240 - 22383.065: 98.6064% ( 12) 00:08:02.938 22383.065 - 22483.889: 98.7050% ( 7) 00:08:02.938 22483.889 - 22584.714: 98.7613% ( 4) 00:08:02.938 22584.714 - 22685.538: 98.8035% ( 3) 00:08:02.938 22685.538 - 22786.363: 98.8457% ( 3) 00:08:02.938 22786.363 - 22887.188: 98.8880% ( 3) 00:08:02.938 22887.188 - 22988.012: 98.9302% ( 3) 00:08:02.938 22988.012 - 23088.837: 98.9724% ( 3) 00:08:02.938 23088.837 - 23189.662: 99.0146% ( 3) 00:08:02.938 23189.662 - 23290.486: 99.0709% ( 4) 00:08:02.938 23290.486 - 23391.311: 99.0991% ( 2) 00:08:02.938 27827.594 - 28029.243: 99.1132% ( 1) 00:08:02.938 28029.243 - 28230.892: 99.1976% ( 6) 00:08:02.938 28230.892 - 28432.542: 99.2962% ( 7) 00:08:02.938 28432.542 - 28634.191: 99.3947% ( 7) 
00:08:02.938 28634.191 - 28835.840: 99.4792% ( 6) 00:08:02.938 28835.840 - 29037.489: 99.5777% ( 7) 00:08:02.938 29037.489 - 29239.138: 99.6903% ( 8) 00:08:02.938 29239.138 - 29440.788: 99.7889% ( 7) 00:08:02.938 29440.788 - 29642.437: 99.8874% ( 7) 00:08:02.938 29642.437 - 29844.086: 99.9718% ( 6) 00:08:02.938 29844.086 - 30045.735: 100.0000% ( 2) 00:08:02.938 00:08:02.938 16:34:47 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:04.328 Initializing NVMe Controllers 00:08:04.329 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:04.329 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:04.329 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:04.329 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:04.329 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:04.329 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:04.329 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:04.329 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:04.329 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:04.329 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:04.329 Initialization complete. Launching workers. 00:08:04.329 ======================================================== 00:08:04.329 Latency(us) 00:08:04.329 Device Information : IOPS MiB/s Average min max 00:08:04.329 PCIE (0000:00:10.0) NSID 1 from core 0: 6548.34 76.74 19587.20 10083.20 96529.13 00:08:04.329 PCIE (0000:00:11.0) NSID 1 from core 0: 6548.34 76.74 19551.70 9951.23 95851.69 00:08:04.329 PCIE (0000:00:13.0) NSID 1 from core 0: 6537.42 76.61 19548.29 8676.18 97986.44 00:08:04.329 PCIE (0000:00:12.0) NSID 1 from core 0: 6478.81 75.92 19686.43 7975.88 107819.13 00:08:04.329 PCIE (0000:00:12.0) NSID 2 from core 0: 6548.34 76.74 19438.76 9886.82 94831.34 00:08:04.329 PCIE (0000:00:12.0) NSID 3 from core 0: 6548.34 76.74 19401.66 11191.39 95466.08 00:08:04.329 ======================================================== 00:08:04.329 Total : 39209.59 459.49 19535.40 7975.88 107819.13 00:08:04.329 00:08:04.329 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:04.329 ================================================================================= 00:08:04.329 1.00000% : 11342.769us 00:08:04.329 10.00000% : 15627.815us 00:08:04.329 25.00000% : 16434.412us 00:08:04.329 50.00000% : 17745.132us 00:08:04.329 75.00000% : 19358.326us 00:08:04.329 90.00000% : 20870.695us 00:08:04.329 95.00000% : 22786.363us 00:08:04.329 98.00000% : 45976.025us 00:08:04.329 99.00000% : 90742.154us 00:08:04.329 99.50000% : 95985.034us 00:08:04.329 99.90000% : 96388.332us 00:08:04.329 99.99000% : 96791.631us 00:08:04.329 99.99900% : 96791.631us 00:08:04.329 99.99990% : 96791.631us 00:08:04.329 99.99999% : 96791.631us 00:08:04.329 00:08:04.329 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:04.329 ================================================================================= 00:08:04.329 1.00000% : 11141.120us 00:08:04.329 10.00000% : 15627.815us 00:08:04.329 25.00000% : 16333.588us 00:08:04.329 50.00000% : 17745.132us 00:08:04.329 75.00000% : 19257.502us 00:08:04.329 90.00000% : 20669.046us 00:08:04.329 95.00000% : 22887.188us 00:08:04.329 98.00000% : 44564.480us 00:08:04.329 99.00000% : 91145.452us 00:08:04.329 99.50000% : 95985.034us 00:08:04.329 99.90000% : 95985.034us 00:08:04.329 99.99000% : 95985.034us 00:08:04.329 99.99900% : 95985.034us 00:08:04.329 
99.99990% : 95985.034us 00:08:04.329 99.99999% : 95985.034us 00:08:04.329 00:08:04.329 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:04.329 ================================================================================= 00:08:04.329 1.00000% : 9628.751us 00:08:04.329 10.00000% : 15325.342us 00:08:04.329 25.00000% : 16232.763us 00:08:04.329 50.00000% : 17745.132us 00:08:04.329 75.00000% : 19257.502us 00:08:04.329 90.00000% : 20870.695us 00:08:04.329 95.00000% : 23290.486us 00:08:04.329 98.00000% : 44564.480us 00:08:04.329 99.00000% : 93161.945us 00:08:04.329 99.50000% : 97598.228us 00:08:04.329 99.90000% : 98001.526us 00:08:04.329 99.99000% : 98001.526us 00:08:04.329 99.99900% : 98001.526us 00:08:04.329 99.99990% : 98001.526us 00:08:04.329 99.99999% : 98001.526us 00:08:04.329 00:08:04.329 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:04.329 ================================================================================= 00:08:04.329 1.00000% : 12401.428us 00:08:04.329 10.00000% : 15426.166us 00:08:04.329 25.00000% : 16434.412us 00:08:04.329 50.00000% : 17745.132us 00:08:04.329 75.00000% : 19156.677us 00:08:04.329 90.00000% : 21173.169us 00:08:04.329 95.00000% : 25811.102us 00:08:04.329 98.00000% : 42749.637us 00:08:04.329 99.00000% : 99211.422us 00:08:04.329 99.50000% : 108083.988us 00:08:04.329 99.90000% : 108083.988us 00:08:04.329 99.99000% : 108083.988us 00:08:04.329 99.99900% : 108083.988us 00:08:04.329 99.99990% : 108083.988us 00:08:04.329 99.99999% : 108083.988us 00:08:04.329 00:08:04.329 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:04.329 ================================================================================= 00:08:04.329 1.00000% : 12300.603us 00:08:04.329 10.00000% : 15526.991us 00:08:04.329 25.00000% : 16333.588us 00:08:04.329 50.00000% : 17644.308us 00:08:04.329 75.00000% : 19358.326us 00:08:04.329 90.00000% : 20971.520us 00:08:04.329 95.00000% : 22383.065us 00:08:04.329 98.00000% : 40733.145us 00:08:04.329 99.00000% : 87112.468us 00:08:04.329 99.50000% : 94371.840us 00:08:04.329 99.90000% : 94775.138us 00:08:04.329 99.99000% : 95178.437us 00:08:04.329 99.99900% : 95178.437us 00:08:04.329 99.99990% : 95178.437us 00:08:04.329 99.99999% : 95178.437us 00:08:04.329 00:08:04.329 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:04.329 ================================================================================= 00:08:04.329 1.00000% : 11846.892us 00:08:04.329 10.00000% : 15526.991us 00:08:04.329 25.00000% : 16434.412us 00:08:04.329 50.00000% : 17745.132us 00:08:04.329 75.00000% : 19459.151us 00:08:04.329 90.00000% : 20870.695us 00:08:04.329 95.00000% : 23088.837us 00:08:04.329 98.00000% : 39523.249us 00:08:04.329 99.00000% : 85095.975us 00:08:04.329 99.50000% : 95178.437us 00:08:04.329 99.90000% : 95581.735us 00:08:04.329 99.99000% : 95581.735us 00:08:04.329 99.99900% : 95581.735us 00:08:04.329 99.99990% : 95581.735us 00:08:04.329 99.99999% : 95581.735us 00:08:04.329 00:08:04.329 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:04.329 ============================================================================== 00:08:04.329 Range in us Cumulative IO count 00:08:04.329 10082.462 - 10132.874: 0.0152% ( 1) 00:08:04.329 10132.874 - 10183.286: 0.0455% ( 2) 00:08:04.329 10183.286 - 10233.698: 0.1062% ( 4) 00:08:04.329 10233.698 - 10284.111: 0.1517% ( 3) 00:08:04.329 10284.111 - 10334.523: 0.1972% ( 3) 00:08:04.329 10334.523 - 10384.935: 0.2275% ( 2) 
00:08:04.329 10384.935 - 10435.348: 0.3186% ( 6) 00:08:04.329 10435.348 - 10485.760: 0.3792% ( 4) 00:08:04.329 10485.760 - 10536.172: 0.4096% ( 2) 00:08:04.329 10536.172 - 10586.585: 0.4854% ( 5) 00:08:04.329 10788.234 - 10838.646: 0.5006% ( 1) 00:08:04.329 10838.646 - 10889.058: 0.5309% ( 2) 00:08:04.329 10889.058 - 10939.471: 0.5613% ( 2) 00:08:04.329 10939.471 - 10989.883: 0.5765% ( 1) 00:08:04.329 10989.883 - 11040.295: 0.6371% ( 4) 00:08:04.329 11040.295 - 11090.708: 0.6675% ( 2) 00:08:04.329 11090.708 - 11141.120: 0.6978% ( 2) 00:08:04.329 11141.120 - 11191.532: 0.7282% ( 2) 00:08:04.329 11191.532 - 11241.945: 0.8040% ( 5) 00:08:04.329 11241.945 - 11292.357: 0.9102% ( 7) 00:08:04.329 11292.357 - 11342.769: 1.0316% ( 8) 00:08:04.329 11342.769 - 11393.182: 1.1226% ( 6) 00:08:04.329 11393.182 - 11443.594: 1.1681% ( 3) 00:08:04.329 11443.594 - 11494.006: 1.2894% ( 8) 00:08:04.329 11494.006 - 11544.418: 1.3046% ( 1) 00:08:04.329 11544.418 - 11594.831: 1.3350% ( 2) 00:08:04.329 11594.831 - 11645.243: 1.3805% ( 3) 00:08:04.329 11645.243 - 11695.655: 1.4260% ( 3) 00:08:04.329 11695.655 - 11746.068: 1.4411% ( 1) 00:08:04.329 11746.068 - 11796.480: 1.5018% ( 4) 00:08:04.329 11796.480 - 11846.892: 1.5170% ( 1) 00:08:04.329 11897.305 - 11947.717: 1.5625% ( 3) 00:08:04.329 11998.129 - 12048.542: 1.5928% ( 2) 00:08:04.329 12048.542 - 12098.954: 1.6232% ( 2) 00:08:04.329 12098.954 - 12149.366: 1.6535% ( 2) 00:08:04.329 12149.366 - 12199.778: 1.6687% ( 1) 00:08:04.329 12199.778 - 12250.191: 1.6990% ( 2) 00:08:04.329 12250.191 - 12300.603: 1.7142% ( 1) 00:08:04.329 12300.603 - 12351.015: 1.7597% ( 3) 00:08:04.329 12351.015 - 12401.428: 1.7749% ( 1) 00:08:04.329 12401.428 - 12451.840: 1.8052% ( 2) 00:08:04.329 12451.840 - 12502.252: 1.8356% ( 2) 00:08:04.329 12502.252 - 12552.665: 1.8659% ( 2) 00:08:04.329 12552.665 - 12603.077: 1.8811% ( 1) 00:08:04.329 12603.077 - 12653.489: 1.9114% ( 2) 00:08:04.329 12653.489 - 12703.902: 1.9417% ( 2) 00:08:04.329 14014.622 - 14115.446: 2.0631% ( 8) 00:08:04.329 14115.446 - 14216.271: 2.2603% ( 13) 00:08:04.329 14216.271 - 14317.095: 2.6092% ( 23) 00:08:04.329 14317.095 - 14417.920: 2.8368% ( 15) 00:08:04.329 14417.920 - 14518.745: 2.9733% ( 9) 00:08:04.329 14518.745 - 14619.569: 3.2312% ( 17) 00:08:04.329 14619.569 - 14720.394: 3.8835% ( 43) 00:08:04.329 14720.394 - 14821.218: 4.4144% ( 35) 00:08:04.329 14821.218 - 14922.043: 4.8695% ( 30) 00:08:04.329 14922.043 - 15022.868: 5.2488% ( 25) 00:08:04.329 15022.868 - 15123.692: 5.9314% ( 45) 00:08:04.329 15123.692 - 15224.517: 6.7203% ( 52) 00:08:04.330 15224.517 - 15325.342: 7.9035% ( 78) 00:08:04.330 15325.342 - 15426.166: 8.9047% ( 66) 00:08:04.330 15426.166 - 15526.991: 9.8604% ( 63) 00:08:04.330 15526.991 - 15627.815: 11.4381% ( 104) 00:08:04.330 15627.815 - 15728.640: 13.6226% ( 144) 00:08:04.330 15728.640 - 15829.465: 15.3671% ( 115) 00:08:04.330 15829.465 - 15930.289: 17.0965% ( 114) 00:08:04.330 15930.289 - 16031.114: 18.6286% ( 101) 00:08:04.330 16031.114 - 16131.938: 20.3883% ( 116) 00:08:04.330 16131.938 - 16232.763: 22.6032% ( 146) 00:08:04.330 16232.763 - 16333.588: 24.1960% ( 105) 00:08:04.330 16333.588 - 16434.412: 26.1377% ( 128) 00:08:04.330 16434.412 - 16535.237: 28.4132% ( 150) 00:08:04.330 16535.237 - 16636.062: 30.5370% ( 140) 00:08:04.330 16636.062 - 16736.886: 32.2816% ( 115) 00:08:04.330 16736.886 - 16837.711: 34.6784% ( 158) 00:08:04.330 16837.711 - 16938.535: 36.7112% ( 134) 00:08:04.330 16938.535 - 17039.360: 38.8956% ( 144) 00:08:04.330 17039.360 - 17140.185: 41.1408% ( 148) 00:08:04.330 
17140.185 - 17241.009: 42.8550% ( 113) 00:08:04.330 17241.009 - 17341.834: 44.2961% ( 95) 00:08:04.330 17341.834 - 17442.658: 45.7979% ( 99) 00:08:04.330 17442.658 - 17543.483: 47.4515% ( 109) 00:08:04.330 17543.483 - 17644.308: 49.3629% ( 126) 00:08:04.330 17644.308 - 17745.132: 51.1377% ( 117) 00:08:04.330 17745.132 - 17845.957: 52.7761% ( 108) 00:08:04.330 17845.957 - 17946.782: 54.5358% ( 116) 00:08:04.330 17946.782 - 18047.606: 56.4017% ( 123) 00:08:04.330 18047.606 - 18148.431: 58.2069% ( 119) 00:08:04.330 18148.431 - 18249.255: 59.6329% ( 94) 00:08:04.330 18249.255 - 18350.080: 61.2561% ( 107) 00:08:04.330 18350.080 - 18450.905: 62.8489% ( 105) 00:08:04.330 18450.905 - 18551.729: 64.4266% ( 104) 00:08:04.330 18551.729 - 18652.554: 65.8070% ( 91) 00:08:04.330 18652.554 - 18753.378: 67.1875% ( 91) 00:08:04.330 18753.378 - 18854.203: 68.4163% ( 81) 00:08:04.330 18854.203 - 18955.028: 69.7967% ( 91) 00:08:04.330 18955.028 - 19055.852: 71.1013% ( 86) 00:08:04.330 19055.852 - 19156.677: 72.8459% ( 115) 00:08:04.330 19156.677 - 19257.502: 74.3022% ( 96) 00:08:04.330 19257.502 - 19358.326: 75.5765% ( 84) 00:08:04.330 19358.326 - 19459.151: 76.9569% ( 91) 00:08:04.330 19459.151 - 19559.975: 78.1705% ( 80) 00:08:04.330 19559.975 - 19660.800: 79.2627% ( 72) 00:08:04.330 19660.800 - 19761.625: 80.3701% ( 73) 00:08:04.330 19761.625 - 19862.449: 81.2955% ( 61) 00:08:04.330 19862.449 - 19963.274: 82.2967% ( 66) 00:08:04.330 19963.274 - 20064.098: 83.3434% ( 69) 00:08:04.330 20064.098 - 20164.923: 84.3598% ( 67) 00:08:04.330 20164.923 - 20265.748: 85.2549% ( 59) 00:08:04.330 20265.748 - 20366.572: 86.1650% ( 60) 00:08:04.330 20366.572 - 20467.397: 86.9691% ( 53) 00:08:04.330 20467.397 - 20568.222: 87.8641% ( 59) 00:08:04.330 20568.222 - 20669.046: 88.7288% ( 57) 00:08:04.330 20669.046 - 20769.871: 89.4114% ( 45) 00:08:04.330 20769.871 - 20870.695: 90.0789% ( 44) 00:08:04.330 20870.695 - 20971.520: 90.6250% ( 36) 00:08:04.330 20971.520 - 21072.345: 91.0649% ( 29) 00:08:04.330 21072.345 - 21173.169: 91.4593% ( 26) 00:08:04.330 21173.169 - 21273.994: 91.8993% ( 29) 00:08:04.330 21273.994 - 21374.818: 92.3847% ( 32) 00:08:04.330 21374.818 - 21475.643: 92.8095% ( 28) 00:08:04.330 21475.643 - 21576.468: 93.1129% ( 20) 00:08:04.330 21576.468 - 21677.292: 93.5225% ( 27) 00:08:04.330 21677.292 - 21778.117: 93.6893% ( 11) 00:08:04.330 21778.117 - 21878.942: 93.8562% ( 11) 00:08:04.330 21878.942 - 21979.766: 94.0837% ( 15) 00:08:04.330 21979.766 - 22080.591: 94.2506% ( 11) 00:08:04.330 22080.591 - 22181.415: 94.3416% ( 6) 00:08:04.330 22181.415 - 22282.240: 94.5085% ( 11) 00:08:04.330 22282.240 - 22383.065: 94.6602% ( 10) 00:08:04.330 22383.065 - 22483.889: 94.7816% ( 8) 00:08:04.330 22483.889 - 22584.714: 94.8877% ( 7) 00:08:04.330 22584.714 - 22685.538: 94.9636% ( 5) 00:08:04.330 22685.538 - 22786.363: 95.0394% ( 5) 00:08:04.330 22786.363 - 22887.188: 95.1001% ( 4) 00:08:04.330 22887.188 - 22988.012: 95.1456% ( 3) 00:08:04.330 23492.135 - 23592.960: 95.2367% ( 6) 00:08:04.330 23592.960 - 23693.785: 95.5552% ( 21) 00:08:04.330 23693.785 - 23794.609: 95.6311% ( 5) 00:08:04.330 23794.609 - 23895.434: 95.6766% ( 3) 00:08:04.330 24197.908 - 24298.732: 95.9041% ( 15) 00:08:04.330 27020.997 - 27222.646: 95.9193% ( 1) 00:08:04.330 27222.646 - 27424.295: 95.9800% ( 4) 00:08:04.330 27424.295 - 27625.945: 96.0255% ( 3) 00:08:04.330 27625.945 - 27827.594: 96.1013% ( 5) 00:08:04.330 27827.594 - 28029.243: 96.1165% ( 1) 00:08:04.330 31457.280 - 31658.929: 96.1772% ( 4) 00:08:04.330 31658.929 - 31860.578: 96.2834% ( 7) 
00:08:04.330 31860.578 - 32062.228: 96.3441% ( 4) 00:08:04.330 32062.228 - 32263.877: 96.3592% ( 1) 00:08:04.330 32263.877 - 32465.526: 96.4654% ( 7) 00:08:04.330 32465.526 - 32667.175: 96.5413% ( 5) 00:08:04.330 32667.175 - 32868.825: 96.6323% ( 6) 00:08:04.330 32868.825 - 33070.474: 96.7081% ( 5) 00:08:04.330 33070.474 - 33272.123: 96.7840% ( 5) 00:08:04.330 33272.123 - 33473.772: 96.8598% ( 5) 00:08:04.330 33473.772 - 33675.422: 96.9508% ( 6) 00:08:04.330 33675.422 - 33877.071: 97.0267% ( 5) 00:08:04.330 33877.071 - 34078.720: 97.0874% ( 4) 00:08:04.330 42547.988 - 42749.637: 97.1177% ( 2) 00:08:04.330 42749.637 - 42951.286: 97.1632% ( 3) 00:08:04.330 43152.935 - 43354.585: 97.1784% ( 1) 00:08:04.330 43354.585 - 43556.234: 97.1936% ( 1) 00:08:04.330 43556.234 - 43757.883: 97.2542% ( 4) 00:08:04.330 43757.883 - 43959.532: 97.2846% ( 2) 00:08:04.330 43959.532 - 44161.182: 97.3149% ( 2) 00:08:04.330 44161.182 - 44362.831: 97.3756% ( 4) 00:08:04.330 44362.831 - 44564.480: 97.4818% ( 7) 00:08:04.330 44564.480 - 44766.129: 97.5576% ( 5) 00:08:04.330 44766.129 - 44967.778: 97.6183% ( 4) 00:08:04.330 44967.778 - 45169.428: 97.7245% ( 7) 00:08:04.330 45169.428 - 45371.077: 97.7852% ( 4) 00:08:04.330 45371.077 - 45572.726: 97.8914% ( 7) 00:08:04.330 45572.726 - 45774.375: 97.9672% ( 5) 00:08:04.330 45774.375 - 45976.025: 98.0431% ( 5) 00:08:04.330 45976.025 - 46177.674: 98.0583% ( 1) 00:08:04.330 88725.662 - 89128.960: 98.1796% ( 8) 00:08:04.330 89128.960 - 89532.258: 98.5285% ( 23) 00:08:04.330 89532.258 - 89935.557: 98.8319% ( 20) 00:08:04.330 89935.557 - 90338.855: 98.9684% ( 9) 00:08:04.330 90338.855 - 90742.154: 99.0291% ( 4) 00:08:04.330 95178.437 - 95581.735: 99.2870% ( 17) 00:08:04.330 95581.735 - 95985.034: 99.7573% ( 31) 00:08:04.330 95985.034 - 96388.332: 99.9242% ( 11) 00:08:04.330 96388.332 - 96791.631: 100.0000% ( 5) 00:08:04.330 00:08:04.330 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:04.330 ============================================================================== 00:08:04.330 Range in us Cumulative IO count 00:08:04.330 9931.225 - 9981.637: 0.0455% ( 3) 00:08:04.330 9981.637 - 10032.049: 0.0910% ( 3) 00:08:04.330 10032.049 - 10082.462: 0.1517% ( 4) 00:08:04.330 10082.462 - 10132.874: 0.1972% ( 3) 00:08:04.330 10132.874 - 10183.286: 0.2579% ( 4) 00:08:04.330 10183.286 - 10233.698: 0.3186% ( 4) 00:08:04.330 10233.698 - 10284.111: 0.3792% ( 4) 00:08:04.330 10284.111 - 10334.523: 0.5309% ( 10) 00:08:04.330 10334.523 - 10384.935: 0.5613% ( 2) 00:08:04.330 10384.935 - 10435.348: 0.5916% ( 2) 00:08:04.331 10435.348 - 10485.760: 0.6220% ( 2) 00:08:04.331 10485.760 - 10536.172: 0.6523% ( 2) 00:08:04.331 10536.172 - 10586.585: 0.6826% ( 2) 00:08:04.331 10586.585 - 10636.997: 0.7130% ( 2) 00:08:04.331 10636.997 - 10687.409: 0.7433% ( 2) 00:08:04.331 10737.822 - 10788.234: 0.7737% ( 2) 00:08:04.331 10788.234 - 10838.646: 0.8040% ( 2) 00:08:04.331 10838.646 - 10889.058: 0.8192% ( 1) 00:08:04.331 10889.058 - 10939.471: 0.8495% ( 2) 00:08:04.331 10939.471 - 10989.883: 0.8799% ( 2) 00:08:04.331 10989.883 - 11040.295: 0.9254% ( 3) 00:08:04.331 11040.295 - 11090.708: 0.9860% ( 4) 00:08:04.331 11090.708 - 11141.120: 1.0619% ( 5) 00:08:04.331 11141.120 - 11191.532: 1.1226% ( 4) 00:08:04.331 11191.532 - 11241.945: 1.1833% ( 4) 00:08:04.331 11241.945 - 11292.357: 1.2894% ( 7) 00:08:04.331 11292.357 - 11342.769: 1.4715% ( 12) 00:08:04.331 11342.769 - 11393.182: 1.5170% ( 3) 00:08:04.331 11393.182 - 11443.594: 1.5473% ( 2) 00:08:04.331 11443.594 - 11494.006: 1.5777% ( 2) 
00:08:04.331 11494.006 - 11544.418: 1.6232% ( 3) 00:08:04.331 11544.418 - 11594.831: 1.6839% ( 4) 00:08:04.331 11594.831 - 11645.243: 1.7142% ( 2) 00:08:04.331 11645.243 - 11695.655: 1.7597% ( 3) 00:08:04.331 11695.655 - 11746.068: 1.8052% ( 3) 00:08:04.331 11746.068 - 11796.480: 1.8507% ( 3) 00:08:04.331 11796.480 - 11846.892: 1.8962% ( 3) 00:08:04.331 11846.892 - 11897.305: 1.9266% ( 2) 00:08:04.331 11897.305 - 11947.717: 1.9417% ( 1) 00:08:04.331 13712.148 - 13812.972: 1.9873% ( 3) 00:08:04.331 13812.972 - 13913.797: 2.0783% ( 6) 00:08:04.331 13913.797 - 14014.622: 2.1390% ( 4) 00:08:04.331 14014.622 - 14115.446: 2.2148% ( 5) 00:08:04.331 14115.446 - 14216.271: 2.3362% ( 8) 00:08:04.331 14216.271 - 14317.095: 2.6851% ( 23) 00:08:04.331 14317.095 - 14417.920: 2.7913% ( 7) 00:08:04.331 14417.920 - 14518.745: 2.9581% ( 11) 00:08:04.331 14518.745 - 14619.569: 3.2008% ( 16) 00:08:04.331 14619.569 - 14720.394: 3.5498% ( 23) 00:08:04.331 14720.394 - 14821.218: 4.0504% ( 33) 00:08:04.331 14821.218 - 14922.043: 4.2931% ( 16) 00:08:04.331 14922.043 - 15022.868: 4.6572% ( 24) 00:08:04.331 15022.868 - 15123.692: 5.2488% ( 39) 00:08:04.331 15123.692 - 15224.517: 5.9466% ( 46) 00:08:04.331 15224.517 - 15325.342: 6.6444% ( 46) 00:08:04.331 15325.342 - 15426.166: 7.7215% ( 71) 00:08:04.331 15426.166 - 15526.991: 8.7834% ( 70) 00:08:04.331 15526.991 - 15627.815: 10.1032% ( 87) 00:08:04.331 15627.815 - 15728.640: 11.9691% ( 123) 00:08:04.331 15728.640 - 15829.465: 14.0170% ( 135) 00:08:04.331 15829.465 - 15930.289: 16.5655% ( 168) 00:08:04.331 15930.289 - 16031.114: 19.2961% ( 180) 00:08:04.331 16031.114 - 16131.938: 21.8598% ( 169) 00:08:04.331 16131.938 - 16232.763: 24.2870% ( 160) 00:08:04.331 16232.763 - 16333.588: 26.1529% ( 123) 00:08:04.331 16333.588 - 16434.412: 27.8823% ( 114) 00:08:04.331 16434.412 - 16535.237: 29.5510% ( 110) 00:08:04.331 16535.237 - 16636.062: 31.7658% ( 146) 00:08:04.331 16636.062 - 16736.886: 33.4345% ( 110) 00:08:04.331 16736.886 - 16837.711: 35.0728% ( 108) 00:08:04.331 16837.711 - 16938.535: 36.9387% ( 123) 00:08:04.331 16938.535 - 17039.360: 38.7136% ( 117) 00:08:04.331 17039.360 - 17140.185: 40.1396% ( 94) 00:08:04.331 17140.185 - 17241.009: 41.3228% ( 78) 00:08:04.331 17241.009 - 17341.834: 42.9612% ( 108) 00:08:04.331 17341.834 - 17442.658: 44.6602% ( 112) 00:08:04.331 17442.658 - 17543.483: 46.7233% ( 136) 00:08:04.331 17543.483 - 17644.308: 49.0291% ( 152) 00:08:04.331 17644.308 - 17745.132: 51.3956% ( 156) 00:08:04.331 17745.132 - 17845.957: 53.2615% ( 123) 00:08:04.331 17845.957 - 17946.782: 54.9909% ( 114) 00:08:04.331 17946.782 - 18047.606: 56.6596% ( 110) 00:08:04.331 18047.606 - 18148.431: 58.4041% ( 115) 00:08:04.331 18148.431 - 18249.255: 60.1790% ( 117) 00:08:04.331 18249.255 - 18350.080: 61.5291% ( 89) 00:08:04.331 18350.080 - 18450.905: 62.6820% ( 76) 00:08:04.331 18450.905 - 18551.729: 64.0625% ( 91) 00:08:04.331 18551.729 - 18652.554: 65.9739% ( 126) 00:08:04.331 18652.554 - 18753.378: 67.6729% ( 112) 00:08:04.331 18753.378 - 18854.203: 69.5692% ( 125) 00:08:04.331 18854.203 - 18955.028: 71.8447% ( 150) 00:08:04.331 18955.028 - 19055.852: 73.3920% ( 102) 00:08:04.331 19055.852 - 19156.677: 74.7421% ( 89) 00:08:04.331 19156.677 - 19257.502: 76.2591% ( 100) 00:08:04.331 19257.502 - 19358.326: 77.7002% ( 95) 00:08:04.331 19358.326 - 19459.151: 79.0504% ( 89) 00:08:04.331 19459.151 - 19559.975: 80.3246% ( 84) 00:08:04.331 19559.975 - 19660.800: 81.4169% ( 72) 00:08:04.331 19660.800 - 19761.625: 82.4484% ( 68) 00:08:04.331 19761.625 - 19862.449: 83.3434% ( 
59) 00:08:04.331 19862.449 - 19963.274: 84.0564% ( 47) 00:08:04.331 19963.274 - 20064.098: 84.7694% ( 47) 00:08:04.331 20064.098 - 20164.923: 85.4369% ( 44) 00:08:04.331 20164.923 - 20265.748: 86.2561% ( 54) 00:08:04.331 20265.748 - 20366.572: 87.0601% ( 53) 00:08:04.331 20366.572 - 20467.397: 88.1826% ( 74) 00:08:04.331 20467.397 - 20568.222: 89.2445% ( 70) 00:08:04.331 20568.222 - 20669.046: 90.1547% ( 60) 00:08:04.331 20669.046 - 20769.871: 90.6402% ( 32) 00:08:04.331 20769.871 - 20870.695: 90.9739% ( 22) 00:08:04.331 20870.695 - 20971.520: 91.3987% ( 28) 00:08:04.331 20971.520 - 21072.345: 91.7172% ( 21) 00:08:04.331 21072.345 - 21173.169: 91.9600% ( 16) 00:08:04.331 21173.169 - 21273.994: 92.2785% ( 21) 00:08:04.331 21273.994 - 21374.818: 92.6578% ( 25) 00:08:04.331 21374.818 - 21475.643: 92.9460% ( 19) 00:08:04.331 21475.643 - 21576.468: 93.0977% ( 10) 00:08:04.331 21576.468 - 21677.292: 93.1584% ( 4) 00:08:04.331 21677.292 - 21778.117: 93.2191% ( 4) 00:08:04.331 21778.117 - 21878.942: 93.2797% ( 4) 00:08:04.331 21878.942 - 21979.766: 93.3404% ( 4) 00:08:04.331 21979.766 - 22080.591: 93.5376% ( 13) 00:08:04.331 22080.591 - 22181.415: 93.8107% ( 18) 00:08:04.331 22181.415 - 22282.240: 94.1899% ( 25) 00:08:04.331 22282.240 - 22383.065: 94.4630% ( 18) 00:08:04.331 22383.065 - 22483.889: 94.5540% ( 6) 00:08:04.331 22483.889 - 22584.714: 94.7360% ( 12) 00:08:04.331 22584.714 - 22685.538: 94.8574% ( 8) 00:08:04.331 22685.538 - 22786.363: 94.9333% ( 5) 00:08:04.331 22786.363 - 22887.188: 95.0091% ( 5) 00:08:04.331 22887.188 - 22988.012: 95.0698% ( 4) 00:08:04.331 22988.012 - 23088.837: 95.1153% ( 3) 00:08:04.331 23088.837 - 23189.662: 95.1456% ( 2) 00:08:04.331 24500.382 - 24601.206: 95.1760% ( 2) 00:08:04.331 24702.031 - 24802.855: 95.2518% ( 5) 00:08:04.331 24802.855 - 24903.680: 95.6917% ( 29) 00:08:04.331 24903.680 - 25004.505: 95.9800% ( 19) 00:08:04.331 25004.505 - 25105.329: 96.0255% ( 3) 00:08:04.331 25105.329 - 25206.154: 96.0558% ( 2) 00:08:04.331 25206.154 - 25306.978: 96.1165% ( 4) 00:08:04.331 31860.578 - 32062.228: 96.1772% ( 4) 00:08:04.331 32062.228 - 32263.877: 96.2530% ( 5) 00:08:04.331 32263.877 - 32465.526: 96.3289% ( 5) 00:08:04.331 32465.526 - 32667.175: 96.4199% ( 6) 00:08:04.331 32667.175 - 32868.825: 96.4958% ( 5) 00:08:04.331 32868.825 - 33070.474: 96.5564% ( 4) 00:08:04.331 33070.474 - 33272.123: 96.6475% ( 6) 00:08:04.331 33272.123 - 33473.772: 96.7385% ( 6) 00:08:04.331 33473.772 - 33675.422: 96.8295% ( 6) 00:08:04.331 33675.422 - 33877.071: 96.9205% ( 6) 00:08:04.331 33877.071 - 34078.720: 97.0115% ( 6) 00:08:04.331 34078.720 - 34280.369: 97.0874% ( 5) 00:08:04.331 42346.338 - 42547.988: 97.1481% ( 4) 00:08:04.331 42547.988 - 42749.637: 97.2239% ( 5) 00:08:04.331 42749.637 - 42951.286: 97.2998% ( 5) 00:08:04.331 42951.286 - 43152.935: 97.3908% ( 6) 00:08:04.331 43152.935 - 43354.585: 97.4818% ( 6) 00:08:04.331 43354.585 - 43556.234: 97.5728% ( 6) 00:08:04.331 43556.234 - 43757.883: 97.6790% ( 7) 00:08:04.331 43757.883 - 43959.532: 97.7700% ( 6) 00:08:04.331 43959.532 - 44161.182: 97.8762% ( 7) 00:08:04.331 44161.182 - 44362.831: 97.9672% ( 6) 00:08:04.331 44362.831 - 44564.480: 98.0583% ( 6) 00:08:04.331 89532.258 - 89935.557: 98.1189% ( 4) 00:08:04.331 89935.557 - 90338.855: 98.5285% ( 27) 00:08:04.331 90338.855 - 90742.154: 98.9533% ( 28) 00:08:04.331 90742.154 - 91145.452: 99.0291% ( 5) 00:08:04.331 95178.437 - 95581.735: 99.0443% ( 1) 00:08:04.331 95581.735 - 95985.034: 100.0000% ( 63) 00:08:04.331 00:08:04.331 Latency histogram for PCIE (0000:00:13.0) NSID 
1 from core 0: 00:08:04.331 ============================================================================== 00:08:04.331 Range in us Cumulative IO count 00:08:04.331 8670.917 - 8721.329: 0.0456% ( 3) 00:08:04.331 8721.329 - 8771.742: 0.0608% ( 1) 00:08:04.331 8771.742 - 8822.154: 0.0760% ( 1) 00:08:04.331 8973.391 - 9023.803: 0.1216% ( 3) 00:08:04.331 9023.803 - 9074.215: 0.1671% ( 3) 00:08:04.331 9074.215 - 9124.628: 0.1975% ( 2) 00:08:04.331 9124.628 - 9175.040: 0.2279% ( 2) 00:08:04.331 9175.040 - 9225.452: 0.2735% ( 3) 00:08:04.331 9225.452 - 9275.865: 0.3343% ( 4) 00:08:04.332 9275.865 - 9326.277: 0.4255% ( 6) 00:08:04.332 9326.277 - 9376.689: 0.5014% ( 5) 00:08:04.332 9376.689 - 9427.102: 0.5470% ( 3) 00:08:04.332 9427.102 - 9477.514: 0.6382% ( 6) 00:08:04.332 9477.514 - 9527.926: 0.7750% ( 9) 00:08:04.332 9527.926 - 9578.338: 0.9117% ( 9) 00:08:04.332 9578.338 - 9628.751: 1.1093% ( 13) 00:08:04.332 9628.751 - 9679.163: 1.2916% ( 12) 00:08:04.332 9679.163 - 9729.575: 1.3220% ( 2) 00:08:04.332 9729.575 - 9779.988: 1.3372% ( 1) 00:08:04.332 9779.988 - 9830.400: 1.3524% ( 1) 00:08:04.332 9830.400 - 9880.812: 1.3980% ( 3) 00:08:04.332 9880.812 - 9931.225: 1.4284% ( 2) 00:08:04.332 9931.225 - 9981.637: 1.4587% ( 2) 00:08:04.332 9981.637 - 10032.049: 1.4891% ( 2) 00:08:04.332 10032.049 - 10082.462: 1.5347% ( 3) 00:08:04.332 10082.462 - 10132.874: 1.5651% ( 2) 00:08:04.332 10132.874 - 10183.286: 1.5955% ( 2) 00:08:04.332 10183.286 - 10233.698: 1.6107% ( 1) 00:08:04.332 10233.698 - 10284.111: 1.6411% ( 2) 00:08:04.332 10334.523 - 10384.935: 1.6715% ( 2) 00:08:04.332 10384.935 - 10435.348: 1.7171% ( 3) 00:08:04.332 10838.646 - 10889.058: 1.7475% ( 2) 00:08:04.332 10889.058 - 10939.471: 1.7778% ( 2) 00:08:04.332 12149.366 - 12199.778: 1.7930% ( 1) 00:08:04.332 12199.778 - 12250.191: 1.8082% ( 1) 00:08:04.332 12250.191 - 12300.603: 1.8234% ( 1) 00:08:04.332 12300.603 - 12351.015: 1.8538% ( 2) 00:08:04.332 12351.015 - 12401.428: 1.8842% ( 2) 00:08:04.332 12401.428 - 12451.840: 1.9298% ( 3) 00:08:04.332 12451.840 - 12502.252: 1.9754% ( 3) 00:08:04.332 12502.252 - 12552.665: 1.9906% ( 1) 00:08:04.332 12552.665 - 12603.077: 2.0210% ( 2) 00:08:04.332 12603.077 - 12653.489: 2.0666% ( 3) 00:08:04.332 12653.489 - 12703.902: 2.0818% ( 1) 00:08:04.332 12703.902 - 12754.314: 2.1121% ( 2) 00:08:04.332 12754.314 - 12804.726: 2.1425% ( 2) 00:08:04.332 12804.726 - 12855.138: 2.3249% ( 12) 00:08:04.332 12855.138 - 12905.551: 2.5072% ( 12) 00:08:04.332 12905.551 - 13006.375: 2.6136% ( 7) 00:08:04.332 13006.375 - 13107.200: 2.6744% ( 4) 00:08:04.332 13107.200 - 13208.025: 2.7200% ( 3) 00:08:04.332 13208.025 - 13308.849: 2.7503% ( 2) 00:08:04.332 13913.797 - 14014.622: 2.7655% ( 1) 00:08:04.332 14014.622 - 14115.446: 2.8871% ( 8) 00:08:04.332 14115.446 - 14216.271: 3.0087% ( 8) 00:08:04.332 14216.271 - 14317.095: 3.2670% ( 17) 00:08:04.332 14317.095 - 14417.920: 3.6621% ( 26) 00:08:04.332 14417.920 - 14518.745: 4.2395% ( 38) 00:08:04.332 14518.745 - 14619.569: 4.6042% ( 24) 00:08:04.332 14619.569 - 14720.394: 5.1056% ( 33) 00:08:04.332 14720.394 - 14821.218: 5.7134% ( 40) 00:08:04.332 14821.218 - 14922.043: 6.8835% ( 77) 00:08:04.332 14922.043 - 15022.868: 7.9927% ( 73) 00:08:04.332 15022.868 - 15123.692: 8.9044% ( 60) 00:08:04.332 15123.692 - 15224.517: 9.8009% ( 59) 00:08:04.332 15224.517 - 15325.342: 10.8494% ( 69) 00:08:04.332 15325.342 - 15426.166: 12.0043% ( 76) 00:08:04.332 15426.166 - 15526.991: 13.6909% ( 111) 00:08:04.332 15526.991 - 15627.815: 15.2712% ( 104) 00:08:04.332 15627.815 - 15728.640: 16.9275% 
( 109) 00:08:04.332 15728.640 - 15829.465: 18.5534% ( 107) 00:08:04.332 15829.465 - 15930.289: 20.4832% ( 127) 00:08:04.332 15930.289 - 16031.114: 22.0483% ( 103) 00:08:04.332 16031.114 - 16131.938: 23.7046% ( 109) 00:08:04.332 16131.938 - 16232.763: 25.4217% ( 113) 00:08:04.332 16232.763 - 16333.588: 27.0628% ( 108) 00:08:04.332 16333.588 - 16434.412: 29.1445% ( 137) 00:08:04.332 16434.412 - 16535.237: 31.1655% ( 133) 00:08:04.332 16535.237 - 16636.062: 32.8218% ( 109) 00:08:04.332 16636.062 - 16736.886: 34.1286% ( 86) 00:08:04.332 16736.886 - 16837.711: 35.4050% ( 84) 00:08:04.332 16837.711 - 16938.535: 37.0460% ( 108) 00:08:04.332 16938.535 - 17039.360: 38.7631% ( 113) 00:08:04.332 17039.360 - 17140.185: 40.3586% ( 105) 00:08:04.332 17140.185 - 17241.009: 41.8477% ( 98) 00:08:04.332 17241.009 - 17341.834: 43.7623% ( 126) 00:08:04.332 17341.834 - 17442.658: 45.8897% ( 140) 00:08:04.332 17442.658 - 17543.483: 47.5004% ( 106) 00:08:04.332 17543.483 - 17644.308: 49.2478% ( 115) 00:08:04.332 17644.308 - 17745.132: 50.7370% ( 98) 00:08:04.332 17745.132 - 17845.957: 52.1957% ( 96) 00:08:04.332 17845.957 - 17946.782: 54.0647% ( 123) 00:08:04.332 17946.782 - 18047.606: 56.0097% ( 128) 00:08:04.332 18047.606 - 18148.431: 57.7572% ( 115) 00:08:04.332 18148.431 - 18249.255: 59.5046% ( 115) 00:08:04.332 18249.255 - 18350.080: 61.1609% ( 109) 00:08:04.332 18350.080 - 18450.905: 62.8172% ( 109) 00:08:04.332 18450.905 - 18551.729: 64.5039% ( 111) 00:08:04.332 18551.729 - 18652.554: 66.2969% ( 118) 00:08:04.332 18652.554 - 18753.378: 67.6189% ( 87) 00:08:04.332 18753.378 - 18854.203: 68.9561% ( 88) 00:08:04.332 18854.203 - 18955.028: 70.7339% ( 117) 00:08:04.332 18955.028 - 19055.852: 72.4510% ( 113) 00:08:04.332 19055.852 - 19156.677: 73.9249% ( 97) 00:08:04.332 19156.677 - 19257.502: 75.4293% ( 99) 00:08:04.332 19257.502 - 19358.326: 76.6297% ( 79) 00:08:04.332 19358.326 - 19459.151: 77.8149% ( 78) 00:08:04.332 19459.151 - 19559.975: 78.8938% ( 71) 00:08:04.332 19559.975 - 19660.800: 80.0486% ( 76) 00:08:04.332 19660.800 - 19761.625: 81.2339% ( 78) 00:08:04.332 19761.625 - 19862.449: 82.2823% ( 69) 00:08:04.332 19862.449 - 19963.274: 83.3460% ( 70) 00:08:04.332 19963.274 - 20064.098: 84.4249% ( 71) 00:08:04.332 20064.098 - 20164.923: 85.2454% ( 54) 00:08:04.332 20164.923 - 20265.748: 86.0204% ( 51) 00:08:04.332 20265.748 - 20366.572: 87.0992% ( 71) 00:08:04.332 20366.572 - 20467.397: 88.0261% ( 61) 00:08:04.332 20467.397 - 20568.222: 88.5732% ( 36) 00:08:04.332 20568.222 - 20669.046: 89.3177% ( 49) 00:08:04.332 20669.046 - 20769.871: 89.9711% ( 43) 00:08:04.332 20769.871 - 20870.695: 90.4878% ( 34) 00:08:04.332 20870.695 - 20971.520: 90.9740% ( 32) 00:08:04.332 20971.520 - 21072.345: 91.3843% ( 27) 00:08:04.332 21072.345 - 21173.169: 91.8250% ( 29) 00:08:04.332 21173.169 - 21273.994: 92.1592% ( 22) 00:08:04.332 21273.994 - 21374.818: 92.4632% ( 20) 00:08:04.332 21374.818 - 21475.643: 92.7974% ( 22) 00:08:04.332 21475.643 - 21576.468: 93.0254% ( 15) 00:08:04.332 21576.468 - 21677.292: 93.2533% ( 15) 00:08:04.332 21677.292 - 21778.117: 93.5572% ( 20) 00:08:04.332 21778.117 - 21878.942: 93.6940% ( 9) 00:08:04.332 21878.942 - 21979.766: 93.8459% ( 10) 00:08:04.332 21979.766 - 22080.591: 94.0283% ( 12) 00:08:04.332 22080.591 - 22181.415: 94.1954% ( 11) 00:08:04.332 22181.415 - 22282.240: 94.3474% ( 10) 00:08:04.332 22282.240 - 22383.065: 94.4689% ( 8) 00:08:04.332 22383.065 - 22483.889: 94.5753% ( 7) 00:08:04.332 22483.889 - 22584.714: 94.6665% ( 6) 00:08:04.332 22584.714 - 22685.538: 94.7272% ( 4) 
00:08:04.332 22685.538 - 22786.363: 94.7728% ( 3) 00:08:04.332 22786.363 - 22887.188: 94.8184% ( 3) 00:08:04.332 22887.188 - 22988.012: 94.8792% ( 4) 00:08:04.332 22988.012 - 23088.837: 94.9400% ( 4) 00:08:04.332 23088.837 - 23189.662: 94.9856% ( 3) 00:08:04.332 23189.662 - 23290.486: 95.0312% ( 3) 00:08:04.332 23290.486 - 23391.311: 95.0615% ( 2) 00:08:04.332 25105.329 - 25206.154: 95.1223% ( 4) 00:08:04.332 25206.154 - 25306.978: 95.1375% ( 1) 00:08:04.332 25811.102 - 26012.751: 95.1831% ( 3) 00:08:04.332 26012.751 - 26214.400: 95.1983% ( 1) 00:08:04.332 27222.646 - 27424.295: 95.3047% ( 7) 00:08:04.332 27424.295 - 27625.945: 95.8061% ( 33) 00:08:04.332 27625.945 - 27827.594: 95.9429% ( 9) 00:08:04.332 30449.034 - 30650.683: 95.9581% ( 1) 00:08:04.332 30650.683 - 30852.332: 96.0036% ( 3) 00:08:04.332 30852.332 - 31053.982: 96.0188% ( 1) 00:08:04.332 32263.877 - 32465.526: 96.0340% ( 1) 00:08:04.332 32465.526 - 32667.175: 96.1252% ( 6) 00:08:04.332 32667.175 - 32868.825: 96.2164% ( 6) 00:08:04.332 32868.825 - 33070.474: 96.3531% ( 9) 00:08:04.332 33070.474 - 33272.123: 96.4747% ( 8) 00:08:04.332 33272.123 - 33473.772: 96.5507% ( 5) 00:08:04.332 33473.772 - 33675.422: 96.6267% ( 5) 00:08:04.332 33675.422 - 33877.071: 96.6874% ( 4) 00:08:04.332 33877.071 - 34078.720: 96.7634% ( 5) 00:08:04.332 34078.720 - 34280.369: 96.8242% ( 4) 00:08:04.332 34280.369 - 34482.018: 96.9002% ( 5) 00:08:04.332 34482.018 - 34683.668: 96.9458% ( 3) 00:08:04.332 34683.668 - 34885.317: 96.9913% ( 3) 00:08:04.332 35893.563 - 36095.212: 97.0521% ( 4) 00:08:04.332 36095.212 - 36296.862: 97.0825% ( 2) 00:08:04.332 41539.742 - 41741.391: 97.0977% ( 1) 00:08:04.332 41741.391 - 41943.040: 97.2952% ( 13) 00:08:04.332 41943.040 - 42144.689: 97.4016% ( 7) 00:08:04.332 42951.286 - 43152.935: 97.4320% ( 2) 00:08:04.332 43152.935 - 43354.585: 97.5080% ( 5) 00:08:04.332 43354.585 - 43556.234: 97.5840% ( 5) 00:08:04.332 43556.234 - 43757.883: 97.6903% ( 7) 00:08:04.332 43757.883 - 43959.532: 97.7815% ( 6) 00:08:04.332 43959.532 - 44161.182: 97.8879% ( 7) 00:08:04.332 44161.182 - 44362.831: 97.9638% ( 5) 00:08:04.332 44362.831 - 44564.480: 98.0550% ( 6) 00:08:04.332 91548.751 - 91952.049: 98.3437% ( 19) 00:08:04.332 91952.049 - 92355.348: 98.6172% ( 18) 00:08:04.332 92355.348 - 92758.646: 98.9059% ( 19) 00:08:04.332 92758.646 - 93161.945: 99.0275% ( 8) 00:08:04.332 97194.929 - 97598.228: 99.6353% ( 40) 00:08:04.332 97598.228 - 98001.526: 100.0000% ( 24) 00:08:04.332 00:08:04.332 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:04.332 ============================================================================== 00:08:04.333 Range in us Cumulative IO count 00:08:04.333 7965.145 - 8015.557: 0.0307% ( 2) 00:08:04.333 8015.557 - 8065.969: 0.0767% ( 3) 00:08:04.333 8065.969 - 8116.382: 0.1073% ( 2) 00:08:04.333 8116.382 - 8166.794: 0.1533% ( 3) 00:08:04.333 8166.794 - 8217.206: 0.1840% ( 2) 00:08:04.333 8217.206 - 8267.618: 0.2147% ( 2) 00:08:04.333 8267.618 - 8318.031: 0.2607% ( 3) 00:08:04.333 8318.031 - 8368.443: 0.2913% ( 2) 00:08:04.333 8368.443 - 8418.855: 0.3373% ( 3) 00:08:04.333 8418.855 - 8469.268: 0.3527% ( 1) 00:08:04.333 8469.268 - 8519.680: 0.3833% ( 2) 00:08:04.333 8519.680 - 8570.092: 0.4293% ( 3) 00:08:04.333 8570.092 - 8620.505: 0.4600% ( 2) 00:08:04.333 8620.505 - 8670.917: 0.5060% ( 3) 00:08:04.333 8670.917 - 8721.329: 0.5520% ( 3) 00:08:04.333 8721.329 - 8771.742: 0.5826% ( 2) 00:08:04.333 8771.742 - 8822.154: 0.6286% ( 3) 00:08:04.333 8822.154 - 8872.566: 0.6746% ( 3) 00:08:04.333 8872.566 - 
8922.978: 0.7053% ( 2) 00:08:04.333 8922.978 - 8973.391: 0.7206% ( 1) 00:08:04.333 9427.102 - 9477.514: 0.7513% ( 2) 00:08:04.333 9527.926 - 9578.338: 0.7820% ( 2) 00:08:04.333 9578.338 - 9628.751: 0.7973% ( 1) 00:08:04.333 12048.542 - 12098.954: 0.8126% ( 1) 00:08:04.333 12098.954 - 12149.366: 0.8433% ( 2) 00:08:04.333 12149.366 - 12199.778: 0.9046% ( 4) 00:08:04.333 12199.778 - 12250.191: 0.9353% ( 2) 00:08:04.333 12250.191 - 12300.603: 0.9506% ( 1) 00:08:04.333 12300.603 - 12351.015: 0.9966% ( 3) 00:08:04.333 12351.015 - 12401.428: 1.0273% ( 2) 00:08:04.333 12401.428 - 12451.840: 1.0580% ( 2) 00:08:04.333 12451.840 - 12502.252: 1.1040% ( 3) 00:08:04.333 12502.252 - 12552.665: 1.1500% ( 3) 00:08:04.333 12552.665 - 12603.077: 1.1653% ( 1) 00:08:04.333 12603.077 - 12653.489: 1.2113% ( 3) 00:08:04.333 12653.489 - 12703.902: 1.2420% ( 2) 00:08:04.333 12703.902 - 12754.314: 1.2879% ( 3) 00:08:04.333 12754.314 - 12804.726: 1.5026% ( 14) 00:08:04.333 12804.726 - 12855.138: 1.5639% ( 4) 00:08:04.333 12855.138 - 12905.551: 1.6099% ( 3) 00:08:04.333 12905.551 - 13006.375: 1.7173% ( 7) 00:08:04.333 13006.375 - 13107.200: 1.9166% ( 13) 00:08:04.333 13107.200 - 13208.025: 2.0546% ( 9) 00:08:04.333 13208.025 - 13308.849: 2.1772% ( 8) 00:08:04.333 13308.849 - 13409.674: 2.2999% ( 8) 00:08:04.333 13409.674 - 13510.498: 2.5452% ( 16) 00:08:04.333 13510.498 - 13611.323: 2.7752% ( 15) 00:08:04.333 13611.323 - 13712.148: 3.0512% ( 18) 00:08:04.333 13712.148 - 13812.972: 3.3272% ( 18) 00:08:04.333 13812.972 - 13913.797: 3.5265% ( 13) 00:08:04.333 13913.797 - 14014.622: 3.6339% ( 7) 00:08:04.333 14014.622 - 14115.446: 3.7105% ( 5) 00:08:04.333 14115.446 - 14216.271: 3.7412% ( 2) 00:08:04.333 14216.271 - 14317.095: 3.7718% ( 2) 00:08:04.333 14317.095 - 14417.920: 3.7872% ( 1) 00:08:04.333 14417.920 - 14518.745: 3.9558% ( 11) 00:08:04.333 14518.745 - 14619.569: 4.2625% ( 20) 00:08:04.333 14619.569 - 14720.394: 4.7531% ( 32) 00:08:04.333 14720.394 - 14821.218: 5.2131% ( 30) 00:08:04.333 14821.218 - 14922.043: 5.6578% ( 29) 00:08:04.333 14922.043 - 15022.868: 6.1024% ( 29) 00:08:04.333 15022.868 - 15123.692: 6.7617% ( 43) 00:08:04.333 15123.692 - 15224.517: 7.7737% ( 66) 00:08:04.333 15224.517 - 15325.342: 9.0156% ( 81) 00:08:04.333 15325.342 - 15426.166: 10.1809% ( 76) 00:08:04.333 15426.166 - 15526.991: 11.4842% ( 85) 00:08:04.333 15526.991 - 15627.815: 13.5695% ( 136) 00:08:04.333 15627.815 - 15728.640: 15.5014% ( 126) 00:08:04.333 15728.640 - 15829.465: 17.3720% ( 122) 00:08:04.333 15829.465 - 15930.289: 19.1966% ( 119) 00:08:04.333 15930.289 - 16031.114: 20.9598% ( 115) 00:08:04.333 16031.114 - 16131.938: 22.1711% ( 79) 00:08:04.333 16131.938 - 16232.763: 23.4744% ( 85) 00:08:04.333 16232.763 - 16333.588: 24.7777% ( 85) 00:08:04.333 16333.588 - 16434.412: 26.5103% ( 113) 00:08:04.333 16434.412 - 16535.237: 28.1355% ( 106) 00:08:04.333 16535.237 - 16636.062: 30.0521% ( 125) 00:08:04.333 16636.062 - 16736.886: 31.9687% ( 125) 00:08:04.333 16736.886 - 16837.711: 34.8666% ( 189) 00:08:04.333 16837.711 - 16938.535: 36.9212% ( 134) 00:08:04.333 16938.535 - 17039.360: 38.7458% ( 119) 00:08:04.333 17039.360 - 17140.185: 40.4477% ( 111) 00:08:04.333 17140.185 - 17241.009: 42.1036% ( 108) 00:08:04.333 17241.009 - 17341.834: 43.8362% ( 113) 00:08:04.333 17341.834 - 17442.658: 45.2775% ( 94) 00:08:04.333 17442.658 - 17543.483: 47.1941% ( 125) 00:08:04.333 17543.483 - 17644.308: 48.5741% ( 90) 00:08:04.333 17644.308 - 17745.132: 50.0767% ( 98) 00:08:04.333 17745.132 - 17845.957: 51.5179% ( 94) 00:08:04.333 17845.957 - 
17946.782: 53.2199% ( 111) 00:08:04.333 17946.782 - 18047.606: 54.9371% ( 112) 00:08:04.333 18047.606 - 18148.431: 56.6084% ( 109) 00:08:04.333 18148.431 - 18249.255: 58.7550% ( 140) 00:08:04.333 18249.255 - 18350.080: 60.9782% ( 145) 00:08:04.333 18350.080 - 18450.905: 62.8488% ( 122) 00:08:04.333 18450.905 - 18551.729: 64.5354% ( 110) 00:08:04.333 18551.729 - 18652.554: 66.5133% ( 129) 00:08:04.333 18652.554 - 18753.378: 68.2153% ( 111) 00:08:04.333 18753.378 - 18854.203: 70.0092% ( 117) 00:08:04.333 18854.203 - 18955.028: 71.8798% ( 122) 00:08:04.333 18955.028 - 19055.852: 73.9804% ( 137) 00:08:04.333 19055.852 - 19156.677: 75.7590% ( 116) 00:08:04.333 19156.677 - 19257.502: 77.3382% ( 103) 00:08:04.333 19257.502 - 19358.326: 78.6875% ( 88) 00:08:04.333 19358.326 - 19459.151: 79.6688% ( 64) 00:08:04.333 19459.151 - 19559.975: 80.5734% ( 59) 00:08:04.333 19559.975 - 19660.800: 81.2328% ( 43) 00:08:04.333 19660.800 - 19761.625: 81.8307% ( 39) 00:08:04.333 19761.625 - 19862.449: 82.3674% ( 35) 00:08:04.333 19862.449 - 19963.274: 83.0420% ( 44) 00:08:04.333 19963.274 - 20064.098: 83.7473% ( 46) 00:08:04.333 20064.098 - 20164.923: 84.4680% ( 47) 00:08:04.333 20164.923 - 20265.748: 85.3726% ( 59) 00:08:04.333 20265.748 - 20366.572: 86.1392% ( 50) 00:08:04.333 20366.572 - 20467.397: 86.8905% ( 49) 00:08:04.333 20467.397 - 20568.222: 87.6878% ( 52) 00:08:04.333 20568.222 - 20669.046: 88.4391% ( 49) 00:08:04.333 20669.046 - 20769.871: 88.8378% ( 26) 00:08:04.333 20769.871 - 20870.695: 89.1138% ( 18) 00:08:04.333 20870.695 - 20971.520: 89.3591% ( 16) 00:08:04.333 20971.520 - 21072.345: 89.8804% ( 34) 00:08:04.333 21072.345 - 21173.169: 90.7697% ( 58) 00:08:04.333 21173.169 - 21273.994: 91.1224% ( 23) 00:08:04.333 21273.994 - 21374.818: 91.5363% ( 27) 00:08:04.333 21374.818 - 21475.643: 91.9350% ( 26) 00:08:04.333 21475.643 - 21576.468: 92.1956% ( 17) 00:08:04.333 21576.468 - 21677.292: 92.4716% ( 18) 00:08:04.333 21677.292 - 21778.117: 92.9163% ( 29) 00:08:04.333 21778.117 - 21878.942: 93.4069% ( 32) 00:08:04.333 21878.942 - 21979.766: 93.7902% ( 25) 00:08:04.333 21979.766 - 22080.591: 94.0969% ( 20) 00:08:04.333 22080.591 - 22181.415: 94.4342% ( 22) 00:08:04.333 22181.415 - 22282.240: 94.5875% ( 10) 00:08:04.333 22282.240 - 22383.065: 94.6949% ( 7) 00:08:04.333 22383.065 - 22483.889: 94.7562% ( 4) 00:08:04.333 22483.889 - 22584.714: 94.8022% ( 3) 00:08:04.333 22584.714 - 22685.538: 94.8175% ( 1) 00:08:04.333 22988.012 - 23088.837: 94.9095% ( 6) 00:08:04.333 25105.329 - 25206.154: 94.9555% ( 3) 00:08:04.333 25206.154 - 25306.978: 94.9709% ( 1) 00:08:04.333 25609.452 - 25710.277: 94.9862% ( 1) 00:08:04.333 25710.277 - 25811.102: 95.0322% ( 3) 00:08:04.333 25811.102 - 26012.751: 95.0935% ( 4) 00:08:04.333 26617.698 - 26819.348: 95.1702% ( 5) 00:08:04.333 27625.945 - 27827.594: 95.2315% ( 4) 00:08:04.333 27827.594 - 28029.243: 95.7682% ( 35) 00:08:04.333 28029.243 - 28230.892: 95.8908% ( 8) 00:08:04.333 29844.086 - 30045.735: 95.9062% ( 1) 00:08:04.333 30449.034 - 30650.683: 95.9368% ( 2) 00:08:04.333 30650.683 - 30852.332: 96.0288% ( 6) 00:08:04.333 30852.332 - 31053.982: 96.1822% ( 10) 00:08:04.333 31053.982 - 31255.631: 96.4275% ( 16) 00:08:04.333 31255.631 - 31457.280: 96.5041% ( 5) 00:08:04.333 31457.280 - 31658.929: 96.5808% ( 5) 00:08:04.333 31658.929 - 31860.578: 96.6575% ( 5) 00:08:04.333 31860.578 - 32062.228: 96.7341% ( 5) 00:08:04.333 32062.228 - 32263.877: 96.8108% ( 5) 00:08:04.333 32263.877 - 32465.526: 96.8721% ( 4) 00:08:04.333 32465.526 - 32667.175: 96.9488% ( 5) 00:08:04.333 
32667.175 - 32868.825: 97.0408% ( 6) 00:08:04.333 32868.825 - 33070.474: 97.0561% ( 1) 00:08:04.333 39523.249 - 39724.898: 97.0715% ( 1) 00:08:04.333 39724.898 - 39926.548: 97.2401% ( 11) 00:08:04.333 39926.548 - 40128.197: 97.4088% ( 11) 00:08:04.333 40128.197 - 40329.846: 97.4241% ( 1) 00:08:04.333 40531.495 - 40733.145: 97.4394% ( 1) 00:08:04.333 41338.092 - 41539.742: 97.5161% ( 5) 00:08:04.333 41539.742 - 41741.391: 97.5928% ( 5) 00:08:04.333 41741.391 - 41943.040: 97.6848% ( 6) 00:08:04.333 41943.040 - 42144.689: 97.7768% ( 6) 00:08:04.333 42144.689 - 42346.338: 97.8688% ( 6) 00:08:04.333 42346.338 - 42547.988: 97.9607% ( 6) 00:08:04.333 42547.988 - 42749.637: 98.0374% ( 5) 00:08:04.333 84692.677 - 85095.975: 98.1294% ( 6) 00:08:04.333 93161.945 - 93565.243: 98.2674% ( 9) 00:08:04.333 93565.243 - 93968.542: 98.6507% ( 25) 00:08:04.333 93968.542 - 94371.840: 98.7427% ( 6) 00:08:04.333 98404.825 - 98808.123: 98.8654% ( 8) 00:08:04.333 98808.123 - 99211.422: 99.1107% ( 16) 00:08:04.333 101227.914 - 101631.212: 99.4787% ( 24) 00:08:04.334 107277.391 - 108083.988: 100.0000% ( 34) 00:08:04.334 00:08:04.334 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:04.334 ============================================================================== 00:08:04.334 Range in us Cumulative IO count 00:08:04.334 9880.812 - 9931.225: 0.0303% ( 2) 00:08:04.334 9931.225 - 9981.637: 0.0607% ( 2) 00:08:04.334 9981.637 - 10032.049: 0.0758% ( 1) 00:08:04.334 11746.068 - 11796.480: 0.1062% ( 2) 00:08:04.334 11796.480 - 11846.892: 0.1517% ( 3) 00:08:04.334 11846.892 - 11897.305: 0.1972% ( 3) 00:08:04.334 11897.305 - 11947.717: 0.2882% ( 6) 00:08:04.334 11947.717 - 11998.129: 0.3489% ( 4) 00:08:04.334 11998.129 - 12048.542: 0.4248% ( 5) 00:08:04.334 12048.542 - 12098.954: 0.5309% ( 7) 00:08:04.334 12098.954 - 12149.366: 0.6523% ( 8) 00:08:04.334 12149.366 - 12199.778: 0.7585% ( 7) 00:08:04.334 12199.778 - 12250.191: 0.8495% ( 6) 00:08:04.334 12250.191 - 12300.603: 1.0012% ( 10) 00:08:04.334 12300.603 - 12351.015: 1.1833% ( 12) 00:08:04.334 12351.015 - 12401.428: 1.2743% ( 6) 00:08:04.334 12401.428 - 12451.840: 1.3805% ( 7) 00:08:04.334 12451.840 - 12502.252: 1.6232% ( 16) 00:08:04.334 12502.252 - 12552.665: 1.7597% ( 9) 00:08:04.334 12552.665 - 12603.077: 1.9266% ( 11) 00:08:04.334 12603.077 - 12653.489: 2.2148% ( 19) 00:08:04.334 12653.489 - 12703.902: 2.3665% ( 10) 00:08:04.334 12703.902 - 12754.314: 2.4575% ( 6) 00:08:04.334 12754.314 - 12804.726: 2.5637% ( 7) 00:08:04.334 12804.726 - 12855.138: 2.7002% ( 9) 00:08:04.334 12855.138 - 12905.551: 2.8671% ( 11) 00:08:04.334 12905.551 - 13006.375: 2.9733% ( 7) 00:08:04.334 13006.375 - 13107.200: 3.0340% ( 4) 00:08:04.334 13107.200 - 13208.025: 3.1098% ( 5) 00:08:04.334 13208.025 - 13308.849: 3.1857% ( 5) 00:08:04.334 13308.849 - 13409.674: 3.2615% ( 5) 00:08:04.334 13409.674 - 13510.498: 3.3374% ( 5) 00:08:04.334 13510.498 - 13611.323: 3.3981% ( 4) 00:08:04.334 14317.095 - 14417.920: 3.4891% ( 6) 00:08:04.334 14417.920 - 14518.745: 3.5801% ( 6) 00:08:04.334 14518.745 - 14619.569: 3.7166% ( 9) 00:08:04.334 14619.569 - 14720.394: 3.8532% ( 9) 00:08:04.334 14720.394 - 14821.218: 4.2021% ( 23) 00:08:04.334 14821.218 - 14922.043: 4.5661% ( 24) 00:08:04.334 14922.043 - 15022.868: 4.9909% ( 28) 00:08:04.334 15022.868 - 15123.692: 5.6129% ( 41) 00:08:04.334 15123.692 - 15224.517: 6.4017% ( 52) 00:08:04.334 15224.517 - 15325.342: 7.3726% ( 64) 00:08:04.334 15325.342 - 15426.166: 8.6772% ( 86) 00:08:04.334 15426.166 - 15526.991: 10.2549% ( 104) 00:08:04.334 
15526.991 - 15627.815: 11.9842% ( 114) 00:08:04.334 15627.815 - 15728.640: 13.7591% ( 117) 00:08:04.334 15728.640 - 15829.465: 15.2761% ( 100) 00:08:04.334 15829.465 - 15930.289: 17.2633% ( 131) 00:08:04.334 15930.289 - 16031.114: 19.4478% ( 144) 00:08:04.334 16031.114 - 16131.938: 21.6475% ( 145) 00:08:04.334 16131.938 - 16232.763: 23.8167% ( 143) 00:08:04.334 16232.763 - 16333.588: 26.1833% ( 156) 00:08:04.334 16333.588 - 16434.412: 28.0036% ( 120) 00:08:04.334 16434.412 - 16535.237: 30.7191% ( 179) 00:08:04.334 16535.237 - 16636.062: 32.7518% ( 134) 00:08:04.334 16636.062 - 16736.886: 34.8908% ( 141) 00:08:04.334 16736.886 - 16837.711: 36.9842% ( 138) 00:08:04.334 16837.711 - 16938.535: 38.5012% ( 100) 00:08:04.334 16938.535 - 17039.360: 39.9120% ( 93) 00:08:04.334 17039.360 - 17140.185: 41.4290% ( 100) 00:08:04.334 17140.185 - 17241.009: 42.8095% ( 91) 00:08:04.334 17241.009 - 17341.834: 44.5237% ( 113) 00:08:04.334 17341.834 - 17442.658: 46.4047% ( 124) 00:08:04.334 17442.658 - 17543.483: 48.2555% ( 122) 00:08:04.334 17543.483 - 17644.308: 50.6220% ( 156) 00:08:04.334 17644.308 - 17745.132: 52.2148% ( 105) 00:08:04.334 17745.132 - 17845.957: 53.6559% ( 95) 00:08:04.334 17845.957 - 17946.782: 54.9757% ( 87) 00:08:04.334 17946.782 - 18047.606: 56.4472% ( 97) 00:08:04.334 18047.606 - 18148.431: 57.7215% ( 84) 00:08:04.334 18148.431 - 18249.255: 58.9502% ( 81) 00:08:04.334 18249.255 - 18350.080: 60.4369% ( 98) 00:08:04.334 18350.080 - 18450.905: 62.1511% ( 113) 00:08:04.334 18450.905 - 18551.729: 63.7288% ( 104) 00:08:04.334 18551.729 - 18652.554: 65.1851% ( 96) 00:08:04.334 18652.554 - 18753.378: 67.1268% ( 128) 00:08:04.334 18753.378 - 18854.203: 68.9624% ( 121) 00:08:04.334 18854.203 - 18955.028: 70.6766% ( 113) 00:08:04.334 18955.028 - 19055.852: 72.2694% ( 105) 00:08:04.334 19055.852 - 19156.677: 73.6195% ( 89) 00:08:04.334 19156.677 - 19257.502: 74.9090% ( 85) 00:08:04.334 19257.502 - 19358.326: 76.1377% ( 81) 00:08:04.334 19358.326 - 19459.151: 77.1541% ( 67) 00:08:04.334 19459.151 - 19559.975: 78.1553% ( 66) 00:08:04.334 19559.975 - 19660.800: 79.3841% ( 81) 00:08:04.334 19660.800 - 19761.625: 80.5825% ( 79) 00:08:04.334 19761.625 - 19862.449: 81.6748% ( 72) 00:08:04.334 19862.449 - 19963.274: 82.6456% ( 64) 00:08:04.334 19963.274 - 20064.098: 83.6165% ( 64) 00:08:04.334 20064.098 - 20164.923: 84.5115% ( 59) 00:08:04.334 20164.923 - 20265.748: 85.4672% ( 63) 00:08:04.334 20265.748 - 20366.572: 86.3926% ( 61) 00:08:04.334 20366.572 - 20467.397: 87.1663% ( 51) 00:08:04.334 20467.397 - 20568.222: 88.0006% ( 55) 00:08:04.334 20568.222 - 20669.046: 88.8653% ( 57) 00:08:04.334 20669.046 - 20769.871: 89.4721% ( 40) 00:08:04.334 20769.871 - 20870.695: 89.9879% ( 34) 00:08:04.334 20870.695 - 20971.520: 90.4126% ( 28) 00:08:04.334 20971.520 - 21072.345: 90.6402% ( 15) 00:08:04.334 21072.345 - 21173.169: 90.9891% ( 23) 00:08:04.334 21173.169 - 21273.994: 91.3076% ( 21) 00:08:04.334 21273.994 - 21374.818: 91.6566% ( 23) 00:08:04.334 21374.818 - 21475.643: 91.9903% ( 22) 00:08:04.334 21475.643 - 21576.468: 92.2330% ( 16) 00:08:04.334 21576.468 - 21677.292: 92.6729% ( 29) 00:08:04.334 21677.292 - 21778.117: 93.0674% ( 26) 00:08:04.334 21778.117 - 21878.942: 93.7045% ( 42) 00:08:04.334 21878.942 - 21979.766: 94.0686% ( 24) 00:08:04.334 21979.766 - 22080.591: 94.4175% ( 23) 00:08:04.334 22080.591 - 22181.415: 94.5995% ( 12) 00:08:04.334 22181.415 - 22282.240: 94.8726% ( 18) 00:08:04.334 22282.240 - 22383.065: 95.1760% ( 20) 00:08:04.334 22383.065 - 22483.889: 95.5097% ( 22) 00:08:04.334 22483.889 
- 22584.714: 95.6159% ( 7) 00:08:04.334 22584.714 - 22685.538: 95.6311% ( 1) 00:08:04.334 25407.803 - 25508.628: 95.6614% ( 2) 00:08:04.334 25609.452 - 25710.277: 95.6766% ( 1) 00:08:04.334 25710.277 - 25811.102: 95.7373% ( 4) 00:08:04.334 25811.102 - 26012.751: 95.8434% ( 7) 00:08:04.334 26012.751 - 26214.400: 95.9345% ( 6) 00:08:04.334 26214.400 - 26416.049: 96.0103% ( 5) 00:08:04.334 26416.049 - 26617.698: 96.0407% ( 2) 00:08:04.334 29440.788 - 29642.437: 96.1165% ( 5) 00:08:04.334 29642.437 - 29844.086: 96.2682% ( 10) 00:08:04.334 29844.086 - 30045.735: 96.4047% ( 9) 00:08:04.334 30045.735 - 30247.385: 96.4958% ( 6) 00:08:04.334 30247.385 - 30449.034: 96.5716% ( 5) 00:08:04.334 30449.034 - 30650.683: 96.6475% ( 5) 00:08:04.334 30650.683 - 30852.332: 96.7233% ( 5) 00:08:04.334 30852.332 - 31053.982: 96.7992% ( 5) 00:08:04.334 31053.982 - 31255.631: 96.8902% ( 6) 00:08:04.334 31255.631 - 31457.280: 96.9812% ( 6) 00:08:04.334 31457.280 - 31658.929: 97.0115% ( 2) 00:08:04.334 32263.877 - 32465.526: 97.0874% ( 5) 00:08:04.334 37910.055 - 38111.705: 97.1936% ( 7) 00:08:04.334 38111.705 - 38313.354: 97.2846% ( 6) 00:08:04.334 38313.354 - 38515.003: 97.4059% ( 8) 00:08:04.334 38515.003 - 38716.652: 97.6638% ( 17) 00:08:04.334 38716.652 - 38918.302: 97.7245% ( 4) 00:08:04.334 39119.951 - 39321.600: 97.7397% ( 1) 00:08:04.334 39724.898 - 39926.548: 97.7700% ( 2) 00:08:04.334 39926.548 - 40128.197: 97.8307% ( 4) 00:08:04.334 40128.197 - 40329.846: 97.9066% ( 5) 00:08:04.334 40329.846 - 40531.495: 97.9976% ( 6) 00:08:04.334 40531.495 - 40733.145: 98.0583% ( 4) 00:08:04.334 86305.871 - 86709.169: 98.9078% ( 56) 00:08:04.334 86709.169 - 87112.468: 99.0291% ( 8) 00:08:04.334 93565.243 - 93968.542: 99.1353% ( 7) 00:08:04.334 93968.542 - 94371.840: 99.5449% ( 27) 00:08:04.335 94371.840 - 94775.138: 99.9697% ( 28) 00:08:04.335 94775.138 - 95178.437: 100.0000% ( 2) 00:08:04.335 00:08:04.335 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:04.335 ============================================================================== 00:08:04.335 Range in us Cumulative IO count 00:08:04.335 11141.120 - 11191.532: 0.0152% ( 1) 00:08:04.335 11191.532 - 11241.945: 0.0758% ( 4) 00:08:04.335 11241.945 - 11292.357: 0.1062% ( 2) 00:08:04.335 11292.357 - 11342.769: 0.1365% ( 2) 00:08:04.335 11342.769 - 11393.182: 0.1972% ( 4) 00:08:04.335 11393.182 - 11443.594: 0.2275% ( 2) 00:08:04.335 11443.594 - 11494.006: 0.4248% ( 13) 00:08:04.335 11494.006 - 11544.418: 0.6371% ( 14) 00:08:04.335 11544.418 - 11594.831: 0.7130% ( 5) 00:08:04.335 11594.831 - 11645.243: 0.7585% ( 3) 00:08:04.335 11645.243 - 11695.655: 0.8192% ( 4) 00:08:04.335 11695.655 - 11746.068: 0.9102% ( 6) 00:08:04.335 11746.068 - 11796.480: 0.9557% ( 3) 00:08:04.335 11796.480 - 11846.892: 1.0316% ( 5) 00:08:04.335 11846.892 - 11897.305: 1.0771% ( 3) 00:08:04.335 11897.305 - 11947.717: 1.1529% ( 5) 00:08:04.335 11947.717 - 11998.129: 1.1984% ( 3) 00:08:04.335 11998.129 - 12048.542: 1.2743% ( 5) 00:08:04.335 12048.542 - 12098.954: 1.3805% ( 7) 00:08:04.335 12098.954 - 12149.366: 1.4867% ( 7) 00:08:04.335 12149.366 - 12199.778: 1.7294% ( 16) 00:08:04.335 12199.778 - 12250.191: 1.7749% ( 3) 00:08:04.335 12250.191 - 12300.603: 1.8659% ( 6) 00:08:04.335 12300.603 - 12351.015: 1.9417% ( 5) 00:08:04.335 12351.015 - 12401.428: 2.0176% ( 5) 00:08:04.335 12401.428 - 12451.840: 2.0631% ( 3) 00:08:04.335 12451.840 - 12502.252: 2.1693% ( 7) 00:08:04.335 12502.252 - 12552.665: 2.2603% ( 6) 00:08:04.335 12552.665 - 12603.077: 2.5334% ( 18) 00:08:04.335 
12603.077 - 12653.489: 2.6092% ( 5) 00:08:04.335 12653.489 - 12703.902: 2.6396% ( 2) 00:08:04.335 12703.902 - 12754.314: 2.6699% ( 2) 00:08:04.335 12754.314 - 12804.726: 2.7154% ( 3) 00:08:04.335 12804.726 - 12855.138: 2.7458% ( 2) 00:08:04.335 12855.138 - 12905.551: 2.7761% ( 2) 00:08:04.335 12905.551 - 13006.375: 2.8368% ( 4) 00:08:04.335 13006.375 - 13107.200: 2.8823% ( 3) 00:08:04.335 13107.200 - 13208.025: 2.9126% ( 2) 00:08:04.335 14216.271 - 14317.095: 2.9430% ( 2) 00:08:04.335 14317.095 - 14417.920: 2.9733% ( 2) 00:08:04.335 14417.920 - 14518.745: 3.2312% ( 17) 00:08:04.335 14518.745 - 14619.569: 3.5346% ( 20) 00:08:04.335 14619.569 - 14720.394: 3.7925% ( 17) 00:08:04.335 14720.394 - 14821.218: 4.2627% ( 31) 00:08:04.335 14821.218 - 14922.043: 4.7633% ( 33) 00:08:04.335 14922.043 - 15022.868: 5.4763% ( 47) 00:08:04.335 15022.868 - 15123.692: 6.0528% ( 38) 00:08:04.335 15123.692 - 15224.517: 6.8113% ( 50) 00:08:04.335 15224.517 - 15325.342: 7.5394% ( 48) 00:08:04.335 15325.342 - 15426.166: 8.8744% ( 88) 00:08:04.335 15426.166 - 15526.991: 10.2245% ( 89) 00:08:04.335 15526.991 - 15627.815: 11.8174% ( 105) 00:08:04.335 15627.815 - 15728.640: 13.3192% ( 99) 00:08:04.335 15728.640 - 15829.465: 14.8362% ( 100) 00:08:04.335 15829.465 - 15930.289: 16.1711% ( 88) 00:08:04.335 15930.289 - 16031.114: 18.1129% ( 128) 00:08:04.335 16031.114 - 16131.938: 20.5704% ( 162) 00:08:04.335 16131.938 - 16232.763: 23.0279% ( 162) 00:08:04.335 16232.763 - 16333.588: 24.8635% ( 121) 00:08:04.335 16333.588 - 16434.412: 27.2300% ( 156) 00:08:04.335 16434.412 - 16535.237: 29.9454% ( 179) 00:08:04.335 16535.237 - 16636.062: 31.9933% ( 135) 00:08:04.335 16636.062 - 16736.886: 33.9502% ( 129) 00:08:04.335 16736.886 - 16837.711: 36.1650% ( 146) 00:08:04.335 16837.711 - 16938.535: 37.9551% ( 118) 00:08:04.335 16938.535 - 17039.360: 40.2306% ( 150) 00:08:04.335 17039.360 - 17140.185: 42.6123% ( 157) 00:08:04.335 17140.185 - 17241.009: 44.2051% ( 105) 00:08:04.335 17241.009 - 17341.834: 45.5097% ( 86) 00:08:04.335 17341.834 - 17442.658: 46.9053% ( 92) 00:08:04.335 17442.658 - 17543.483: 48.3161% ( 93) 00:08:04.335 17543.483 - 17644.308: 49.5449% ( 81) 00:08:04.335 17644.308 - 17745.132: 51.2591% ( 113) 00:08:04.335 17745.132 - 17845.957: 52.8671% ( 106) 00:08:04.335 17845.957 - 17946.782: 55.2336% ( 156) 00:08:04.335 17946.782 - 18047.606: 56.7809% ( 102) 00:08:04.335 18047.606 - 18148.431: 58.0704% ( 85) 00:08:04.335 18148.431 - 18249.255: 59.3750% ( 86) 00:08:04.335 18249.255 - 18350.080: 61.1499% ( 117) 00:08:04.335 18350.080 - 18450.905: 62.7275% ( 104) 00:08:04.335 18450.905 - 18551.729: 63.9411% ( 80) 00:08:04.335 18551.729 - 18652.554: 65.3368% ( 92) 00:08:04.335 18652.554 - 18753.378: 66.6717% ( 88) 00:08:04.335 18753.378 - 18854.203: 68.0370% ( 90) 00:08:04.335 18854.203 - 18955.028: 69.0231% ( 65) 00:08:04.335 18955.028 - 19055.852: 70.2063% ( 78) 00:08:04.335 19055.852 - 19156.677: 71.9964% ( 118) 00:08:04.335 19156.677 - 19257.502: 73.4678% ( 97) 00:08:04.335 19257.502 - 19358.326: 74.7118% ( 82) 00:08:04.335 19358.326 - 19459.151: 76.0922% ( 91) 00:08:04.335 19459.151 - 19559.975: 77.6547% ( 103) 00:08:04.335 19559.975 - 19660.800: 79.0504% ( 92) 00:08:04.335 19660.800 - 19761.625: 80.1881% ( 75) 00:08:04.335 19761.625 - 19862.449: 81.2652% ( 71) 00:08:04.335 19862.449 - 19963.274: 82.2816% ( 67) 00:08:04.335 19963.274 - 20064.098: 83.2221% ( 62) 00:08:04.335 20064.098 - 20164.923: 84.0716% ( 56) 00:08:04.335 20164.923 - 20265.748: 84.9970% ( 61) 00:08:04.335 20265.748 - 20366.572: 85.8920% ( 59) 
00:08:04.335 20366.572 - 20467.397: 87.1208% ( 81) 00:08:04.335 20467.397 - 20568.222: 88.2130% ( 72) 00:08:04.335 20568.222 - 20669.046: 88.9715% ( 50) 00:08:04.335 20669.046 - 20769.871: 89.7300% ( 50) 00:08:04.335 20769.871 - 20870.695: 90.6098% ( 58) 00:08:04.335 20870.695 - 20971.520: 91.3532% ( 49) 00:08:04.335 20971.520 - 21072.345: 91.8386% ( 32) 00:08:04.335 21072.345 - 21173.169: 92.0965% ( 17) 00:08:04.335 21173.169 - 21273.994: 92.3695% ( 18) 00:08:04.335 21273.994 - 21374.818: 92.5667% ( 13) 00:08:04.335 21374.818 - 21475.643: 92.7184% ( 10) 00:08:04.335 21475.643 - 21576.468: 92.8398% ( 8) 00:08:04.335 21576.468 - 21677.292: 92.9460% ( 7) 00:08:04.335 21677.292 - 21778.117: 93.1280% ( 12) 00:08:04.335 21778.117 - 21878.942: 93.2797% ( 10) 00:08:04.335 21878.942 - 21979.766: 93.4011% ( 8) 00:08:04.335 21979.766 - 22080.591: 93.5073% ( 7) 00:08:04.335 22080.591 - 22181.415: 93.7500% ( 16) 00:08:04.335 22181.415 - 22282.240: 93.8562% ( 7) 00:08:04.335 22282.240 - 22383.065: 93.9017% ( 3) 00:08:04.335 22383.065 - 22483.889: 93.9775% ( 5) 00:08:04.335 22483.889 - 22584.714: 94.0534% ( 5) 00:08:04.335 22584.714 - 22685.538: 94.1292% ( 5) 00:08:04.335 22685.538 - 22786.363: 94.1748% ( 3) 00:08:04.335 22786.363 - 22887.188: 94.2506% ( 5) 00:08:04.335 22887.188 - 22988.012: 94.9333% ( 45) 00:08:04.335 22988.012 - 23088.837: 95.0698% ( 9) 00:08:04.335 23088.837 - 23189.662: 95.2215% ( 10) 00:08:04.335 23189.662 - 23290.486: 95.6917% ( 31) 00:08:04.335 23290.486 - 23391.311: 95.7979% ( 7) 00:08:04.335 23391.311 - 23492.135: 95.8890% ( 6) 00:08:04.335 23492.135 - 23592.960: 95.9800% ( 6) 00:08:04.335 23592.960 - 23693.785: 96.1013% ( 8) 00:08:04.335 23693.785 - 23794.609: 96.1165% ( 1) 00:08:04.335 26617.698 - 26819.348: 96.1772% ( 4) 00:08:04.335 26819.348 - 27020.997: 96.2985% ( 8) 00:08:04.335 27020.997 - 27222.646: 96.4047% ( 7) 00:08:04.335 27222.646 - 27424.295: 96.5109% ( 7) 00:08:04.335 27424.295 - 27625.945: 96.6019% ( 6) 00:08:04.335 27625.945 - 27827.594: 96.6778% ( 5) 00:08:04.335 27827.594 - 28029.243: 96.7688% ( 6) 00:08:04.335 28029.243 - 28230.892: 96.8598% ( 6) 00:08:04.335 28230.892 - 28432.542: 96.9508% ( 6) 00:08:04.335 28432.542 - 28634.191: 97.0570% ( 7) 00:08:04.335 28634.191 - 28835.840: 97.0874% ( 2) 00:08:04.335 36296.862 - 36498.511: 97.1177% ( 2) 00:08:04.335 36498.511 - 36700.160: 97.2087% ( 6) 00:08:04.335 37506.757 - 37708.406: 97.2239% ( 1) 00:08:04.335 37708.406 - 37910.055: 97.3149% ( 6) 00:08:04.335 37910.055 - 38111.705: 97.4211% ( 7) 00:08:04.335 38111.705 - 38313.354: 97.5273% ( 7) 00:08:04.335 38313.354 - 38515.003: 97.6183% ( 6) 00:08:04.335 38515.003 - 38716.652: 97.7093% ( 6) 00:08:04.335 38716.652 - 38918.302: 97.7700% ( 4) 00:08:04.335 38918.302 - 39119.951: 97.8914% ( 8) 00:08:04.335 39119.951 - 39321.600: 97.9976% ( 7) 00:08:04.335 39321.600 - 39523.249: 98.0583% ( 4) 00:08:04.335 84289.378 - 84692.677: 98.1189% ( 4) 00:08:04.335 84692.677 - 85095.975: 99.0291% ( 60) 00:08:04.335 94775.138 - 95178.437: 99.5297% ( 33) 00:08:04.335 95178.437 - 95581.735: 100.0000% ( 31) 00:08:04.335 00:08:04.335 16:34:48 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:04.335 00:08:04.335 real 0m2.607s 00:08:04.335 user 0m2.219s 00:08:04.335 sys 0m0.268s 00:08:04.335 ************************************ 00:08:04.335 END TEST nvme_perf 00:08:04.335 16:34:48 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.335 16:34:48 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 
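The cumulative latency tables above follow a simple pattern: each completion latency falls into a bucket, and the report prints, per bucket, the running percentage of I/Os whose latency is at or below that range. A minimal, self-contained C sketch of that bookkeeping (illustrative only; the bucket edges and names here are invented and are not SPDK's actual histogram code):

    #include <stdio.h>

    /* Hypothetical bucket edges in microseconds; the real tool derives its own ranges. */
    static const double edges_us[] = { 1000, 2000, 5000, 10000, 50000, 100000, 1e9 /* catch-all */ };
    #define NBUCKETS (sizeof(edges_us) / sizeof(edges_us[0]))

    static unsigned long counts[NBUCKETS];
    static unsigned long total;

    /* Record one completion latency in the first bucket it fits. */
    static void record_latency(double us)
    {
        for (size_t i = 0; i < NBUCKETS; i++) {
            if (us <= edges_us[i]) {
                counts[i]++;
                break;
            }
        }
        total++;
    }

    /* Print a cumulative table similar in spirit to the report above. */
    static void print_histogram(void)
    {
        unsigned long running = 0;

        printf("%12s %14s\n", "<= us", "cumulative %");
        for (size_t i = 0; i < NBUCKETS; i++) {
            running += counts[i];
            printf("%12.0f %12.4f%% (%lu)\n",
                   edges_us[i], 100.0 * running / total, counts[i]);
        }
    }

    int main(void)
    {
        double samples[] = { 800, 1500, 1500, 4200, 9000, 120000 };
        for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
            record_latency(samples[i]);
        print_histogram();
        return 0;
    }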
************************************ 00:08:04.335 16:34:48 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:04.335 16:34:48 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:04.335 16:34:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.335 16:34:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 ************************************ 00:08:04.335 START TEST nvme_hello_world 00:08:04.335 ************************************ 00:08:04.335 16:34:48 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:04.335 Initializing NVMe Controllers 00:08:04.335 Attached to 0000:00:10.0 00:08:04.336 Namespace ID: 1 size: 6GB 00:08:04.336 Attached to 0000:00:11.0 00:08:04.336 Namespace ID: 1 size: 5GB 00:08:04.336 Attached to 0000:00:13.0 00:08:04.336 Namespace ID: 1 size: 1GB 00:08:04.336 Attached to 0000:00:12.0 00:08:04.336 Namespace ID: 1 size: 4GB 00:08:04.336 Namespace ID: 2 size: 4GB 00:08:04.336 Namespace ID: 3 size: 4GB 00:08:04.336 Initialization complete. 00:08:04.336 INFO: using host memory buffer for IO 00:08:04.336 Hello world! 00:08:04.336 INFO: using host memory buffer for IO 00:08:04.336 Hello world! 00:08:04.336 INFO: using host memory buffer for IO 00:08:04.336 Hello world! 00:08:04.336 INFO: using host memory buffer for IO 00:08:04.336 Hello world! 00:08:04.336 INFO: using host memory buffer for IO 00:08:04.336 Hello world! 00:08:04.336 INFO: using host memory buffer for IO 00:08:04.336 Hello world! 00:08:04.336 00:08:04.336 real 0m0.264s 00:08:04.336 user 0m0.107s 00:08:04.336 sys 0m0.099s 00:08:04.336 16:34:49 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.336 ************************************ 00:08:04.336 END TEST nvme_hello_world 00:08:04.336 ************************************ 00:08:04.336 16:34:49 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:04.336 16:34:49 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:04.336 16:34:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.336 16:34:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.336 16:34:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.596 ************************************ 00:08:04.596 START TEST nvme_sgl 00:08:04.596 ************************************ 00:08:04.596 16:34:49 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:04.596 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:04.596 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:04.596 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:04.858 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:04.858 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:04.858 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:04.858 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:04.858 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:04.858 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:04.858 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:04.858 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:04.858 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:04.858 
0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:04.858 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:04.858 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:04.858 NVMe Readv/Writev Request test 00:08:04.858 Attached to 0000:00:10.0 00:08:04.858 Attached to 0000:00:11.0 00:08:04.858 Attached to 0000:00:13.0 00:08:04.858 Attached to 0000:00:12.0 00:08:04.858 0000:00:10.0: build_io_request_2 test passed 00:08:04.858 0000:00:10.0: build_io_request_4 test passed 00:08:04.858 0000:00:10.0: build_io_request_5 test passed 00:08:04.858 0000:00:10.0: build_io_request_6 test passed 00:08:04.858 0000:00:10.0: build_io_request_7 test passed 00:08:04.858 0000:00:10.0: build_io_request_10 test passed 00:08:04.858 0000:00:11.0: build_io_request_2 test passed 00:08:04.858 0000:00:11.0: build_io_request_4 test passed 00:08:04.858 0000:00:11.0: build_io_request_5 test passed 00:08:04.858 0000:00:11.0: build_io_request_6 test passed 00:08:04.858 0000:00:11.0: build_io_request_7 test passed 00:08:04.858 0000:00:11.0: build_io_request_10 test passed 00:08:04.858 Cleaning up... 
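The build_io_request_* lines above come from submitting scatter-gather I/O of assorted sizes: requests whose total byte count is not a whole number of namespace blocks are expected to fail with "Invalid IO length parameter", while the rest complete. A rough, self-contained C sketch of that kind of length check over a scatter-gather list (hypothetical types and sizes, not the SPDK sgl test itself):

    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical scatter-gather element: base address + length in bytes. */
    struct sg_elem {
        void  *base;
        size_t len;
    };

    /* Sum the elements and require the total to be a whole number of blocks. */
    static bool io_length_valid(const struct sg_elem *sgl, size_t nelems,
                                size_t block_size)
    {
        size_t total = 0;

        for (size_t i = 0; i < nelems; i++)
            total += sgl[i].len;

        return total != 0 && total % block_size == 0;
    }

    int main(void)
    {
        static char buf[8192];
        struct sg_elem ok[]  = { { buf, 4096 }, { buf + 4096, 4096 } };
        struct sg_elem bad[] = { { buf, 4096 }, { buf + 4096, 100 } };

        printf("8192-byte request valid: %d\n", io_length_valid(ok, 2, 512));
        printf("4196-byte request valid: %d\n", io_length_valid(bad, 2, 512));
        return 0;
    }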
00:08:04.858 00:08:04.858 real 0m0.355s 00:08:04.858 user 0m0.172s 00:08:04.858 sys 0m0.134s 00:08:04.858 16:34:49 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.858 16:34:49 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:04.858 ************************************ 00:08:04.858 END TEST nvme_sgl 00:08:04.858 ************************************ 00:08:04.858 16:34:49 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:04.858 16:34:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.858 16:34:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.858 16:34:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.858 ************************************ 00:08:04.858 START TEST nvme_e2edp 00:08:04.858 ************************************ 00:08:04.858 16:34:49 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:05.119 NVMe Write/Read with End-to-End data protection test 00:08:05.119 Attached to 0000:00:10.0 00:08:05.119 Attached to 0000:00:11.0 00:08:05.119 Attached to 0000:00:13.0 00:08:05.119 Attached to 0000:00:12.0 00:08:05.119 Cleaning up... 00:08:05.119 ************************************ 00:08:05.119 END TEST nvme_e2edp 00:08:05.119 ************************************ 00:08:05.119 00:08:05.119 real 0m0.255s 00:08:05.119 user 0m0.082s 00:08:05.119 sys 0m0.114s 00:08:05.119 16:34:49 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.119 16:34:49 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:05.119 16:34:49 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:05.119 16:34:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.119 16:34:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.119 16:34:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.119 ************************************ 00:08:05.119 START TEST nvme_reserve 00:08:05.119 ************************************ 00:08:05.119 16:34:49 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:05.379 ===================================================== 00:08:05.379 NVMe Controller at PCI bus 0, device 16, function 0 00:08:05.379 ===================================================== 00:08:05.379 Reservations: Not Supported 00:08:05.379 ===================================================== 00:08:05.379 NVMe Controller at PCI bus 0, device 17, function 0 00:08:05.379 ===================================================== 00:08:05.379 Reservations: Not Supported 00:08:05.379 ===================================================== 00:08:05.379 NVMe Controller at PCI bus 0, device 19, function 0 00:08:05.379 ===================================================== 00:08:05.379 Reservations: Not Supported 00:08:05.379 ===================================================== 00:08:05.379 NVMe Controller at PCI bus 0, device 18, function 0 00:08:05.379 ===================================================== 00:08:05.379 Reservations: Not Supported 00:08:05.379 Reservation test passed 00:08:05.379 ************************************ 00:08:05.379 END TEST nvme_reserve 00:08:05.379 ************************************ 00:08:05.379 00:08:05.379 real 0m0.259s 00:08:05.379 user 0m0.078s 00:08:05.379 sys 0m0.125s 00:08:05.379 16:34:50 
nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.379 16:34:50 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:05.638 16:34:50 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:05.638 16:34:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.638 16:34:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.638 16:34:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.638 ************************************ 00:08:05.638 START TEST nvme_err_injection 00:08:05.638 ************************************ 00:08:05.638 16:34:50 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:05.899 NVMe Error Injection test 00:08:05.899 Attached to 0000:00:10.0 00:08:05.899 Attached to 0000:00:11.0 00:08:05.899 Attached to 0000:00:13.0 00:08:05.899 Attached to 0000:00:12.0 00:08:05.899 0000:00:11.0: get features failed as expected 00:08:05.899 0000:00:13.0: get features failed as expected 00:08:05.899 0000:00:12.0: get features failed as expected 00:08:05.899 0000:00:10.0: get features failed as expected 00:08:05.899 0000:00:13.0: get features successfully as expected 00:08:05.899 0000:00:12.0: get features successfully as expected 00:08:05.899 0000:00:10.0: get features successfully as expected 00:08:05.899 0000:00:11.0: get features successfully as expected 00:08:05.899 0000:00:12.0: read failed as expected 00:08:05.899 0000:00:10.0: read failed as expected 00:08:05.899 0000:00:11.0: read failed as expected 00:08:05.899 0000:00:13.0: read failed as expected 00:08:05.899 0000:00:12.0: read successfully as expected 00:08:05.899 0000:00:10.0: read successfully as expected 00:08:05.899 0000:00:11.0: read successfully as expected 00:08:05.899 0000:00:13.0: read successfully as expected 00:08:05.899 Cleaning up... 00:08:05.899 ************************************ 00:08:05.899 END TEST nvme_err_injection 00:08:05.899 ************************************ 00:08:05.899 00:08:05.899 real 0m0.257s 00:08:05.899 user 0m0.089s 00:08:05.899 sys 0m0.118s 00:08:05.899 16:34:50 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.899 16:34:50 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:05.899 16:34:50 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:05.899 16:34:50 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:05.899 16:34:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.899 16:34:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.899 ************************************ 00:08:05.899 START TEST nvme_overhead 00:08:05.899 ************************************ 00:08:05.899 16:34:50 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:07.284 Initializing NVMe Controllers 00:08:07.284 Attached to 0000:00:10.0 00:08:07.284 Attached to 0000:00:11.0 00:08:07.284 Attached to 0000:00:13.0 00:08:07.284 Attached to 0000:00:12.0 00:08:07.284 Initialization complete. Launching workers. 
00:08:07.284 submit (in ns) avg, min, max = 15319.1, 11583.8, 147146.2 00:08:07.284 complete (in ns) avg, min, max = 9317.1, 8216.2, 184891.5 00:08:07.284 00:08:07.284 Submit histogram 00:08:07.284 ================ 00:08:07.284 Range in us Cumulative Count 00:08:07.284 11.569 - 11.618: 0.0409% ( 1) 00:08:07.284 12.258 - 12.308: 0.0818% ( 1) 00:08:07.284 12.505 - 12.554: 0.1226% ( 1) 00:08:07.284 12.554 - 12.603: 0.1635% ( 1) 00:08:07.284 12.800 - 12.898: 0.2453% ( 2) 00:08:07.284 13.095 - 13.194: 0.3271% ( 2) 00:08:07.284 13.194 - 13.292: 0.3679% ( 1) 00:08:07.284 13.292 - 13.391: 0.7768% ( 10) 00:08:07.284 13.391 - 13.489: 2.2077% ( 35) 00:08:07.284 13.489 - 13.588: 4.9469% ( 67) 00:08:07.284 13.588 - 13.686: 9.7711% ( 118) 00:08:07.284 13.686 - 13.785: 16.4350% ( 163) 00:08:07.284 13.785 - 13.883: 26.0016% ( 234) 00:08:07.284 13.883 - 13.982: 35.0777% ( 222) 00:08:07.284 13.982 - 14.080: 43.8675% ( 215) 00:08:07.284 14.080 - 14.178: 52.5756% ( 213) 00:08:07.284 14.178 - 14.277: 60.7522% ( 200) 00:08:07.284 14.277 - 14.375: 66.7212% ( 146) 00:08:07.284 14.375 - 14.474: 71.4227% ( 115) 00:08:07.284 14.474 - 14.572: 75.0204% ( 88) 00:08:07.284 14.572 - 14.671: 77.6370% ( 64) 00:08:07.284 14.671 - 14.769: 79.4767% ( 45) 00:08:07.284 14.769 - 14.868: 80.7850% ( 32) 00:08:07.284 14.868 - 14.966: 82.5429% ( 43) 00:08:07.284 14.966 - 15.065: 83.3606% ( 20) 00:08:07.284 15.065 - 15.163: 83.9330% ( 14) 00:08:07.284 15.163 - 15.262: 84.3827% ( 11) 00:08:07.284 15.262 - 15.360: 84.7915% ( 10) 00:08:07.284 15.360 - 15.458: 85.2412% ( 11) 00:08:07.284 15.458 - 15.557: 85.5683% ( 8) 00:08:07.284 15.557 - 15.655: 85.9362% ( 9) 00:08:07.284 15.655 - 15.754: 86.2633% ( 8) 00:08:07.284 15.754 - 15.852: 86.5086% ( 6) 00:08:07.284 15.852 - 15.951: 86.7539% ( 6) 00:08:07.284 15.951 - 16.049: 86.9174% ( 4) 00:08:07.284 16.049 - 16.148: 87.1627% ( 6) 00:08:07.284 16.246 - 16.345: 87.2854% ( 3) 00:08:07.284 16.345 - 16.443: 87.4080% ( 3) 00:08:07.284 16.443 - 16.542: 87.4898% ( 2) 00:08:07.284 16.542 - 16.640: 87.5307% ( 1) 00:08:07.284 16.640 - 16.738: 87.6124% ( 2) 00:08:07.284 16.738 - 16.837: 87.7760% ( 4) 00:08:07.284 16.837 - 16.935: 87.8986% ( 3) 00:08:07.284 16.935 - 17.034: 88.0621% ( 4) 00:08:07.284 17.034 - 17.132: 88.1848% ( 3) 00:08:07.284 17.132 - 17.231: 88.3483% ( 4) 00:08:07.284 17.231 - 17.329: 88.6754% ( 8) 00:08:07.284 17.329 - 17.428: 88.7572% ( 2) 00:08:07.284 17.428 - 17.526: 88.8389% ( 2) 00:08:07.284 17.526 - 17.625: 89.0433% ( 5) 00:08:07.284 17.625 - 17.723: 89.2069% ( 4) 00:08:07.284 17.723 - 17.822: 89.4113% ( 5) 00:08:07.284 17.822 - 17.920: 89.4930% ( 2) 00:08:07.284 17.920 - 18.018: 89.6566% ( 4) 00:08:07.284 18.018 - 18.117: 89.7792% ( 3) 00:08:07.284 18.117 - 18.215: 89.9836% ( 5) 00:08:07.284 18.215 - 18.314: 90.1881% ( 5) 00:08:07.284 18.314 - 18.412: 90.5151% ( 8) 00:08:07.284 18.412 - 18.511: 90.8013% ( 7) 00:08:07.284 18.511 - 18.609: 91.0057% ( 5) 00:08:07.284 18.609 - 18.708: 91.3328% ( 8) 00:08:07.284 18.708 - 18.806: 91.4554% ( 3) 00:08:07.284 18.806 - 18.905: 91.6599% ( 5) 00:08:07.284 18.905 - 19.003: 91.9460% ( 7) 00:08:07.284 19.003 - 19.102: 92.1913% ( 6) 00:08:07.284 19.102 - 19.200: 92.2731% ( 2) 00:08:07.284 19.200 - 19.298: 92.4366% ( 4) 00:08:07.285 19.298 - 19.397: 92.6410% ( 5) 00:08:07.285 19.397 - 19.495: 92.6819% ( 1) 00:08:07.285 19.495 - 19.594: 92.9272% ( 6) 00:08:07.285 19.594 - 19.692: 92.9681% ( 1) 00:08:07.285 19.692 - 19.791: 93.0908% ( 3) 00:08:07.285 19.791 - 19.889: 93.1725% ( 2) 00:08:07.285 19.889 - 19.988: 93.2952% ( 3) 00:08:07.285 19.988 - 
20.086: 93.5814% ( 7) 00:08:07.285 20.086 - 20.185: 93.7040% ( 3) 00:08:07.285 20.185 - 20.283: 93.7858% ( 2) 00:08:07.285 20.283 - 20.382: 93.9084% ( 3) 00:08:07.285 20.382 - 20.480: 93.9902% ( 2) 00:08:07.285 20.480 - 20.578: 94.1128% ( 3) 00:08:07.285 20.578 - 20.677: 94.1946% ( 2) 00:08:07.285 20.677 - 20.775: 94.3990% ( 5) 00:08:07.285 20.775 - 20.874: 94.4808% ( 2) 00:08:07.285 20.874 - 20.972: 94.6034% ( 3) 00:08:07.285 20.972 - 21.071: 94.6443% ( 1) 00:08:07.285 21.071 - 21.169: 94.8896% ( 6) 00:08:07.285 21.169 - 21.268: 95.0123% ( 3) 00:08:07.285 21.268 - 21.366: 95.1758% ( 4) 00:08:07.285 21.366 - 21.465: 95.2984% ( 3) 00:08:07.285 21.465 - 21.563: 95.4620% ( 4) 00:08:07.285 21.563 - 21.662: 95.5437% ( 2) 00:08:07.285 21.662 - 21.760: 95.7482% ( 5) 00:08:07.285 21.760 - 21.858: 95.8708% ( 3) 00:08:07.285 21.858 - 21.957: 96.0343% ( 4) 00:08:07.285 21.957 - 22.055: 96.1570% ( 3) 00:08:07.285 22.055 - 22.154: 96.2388% ( 2) 00:08:07.285 22.154 - 22.252: 96.3614% ( 3) 00:08:07.285 22.351 - 22.449: 96.5658% ( 5) 00:08:07.285 22.449 - 22.548: 96.6476% ( 2) 00:08:07.285 22.548 - 22.646: 96.8111% ( 4) 00:08:07.285 22.646 - 22.745: 97.0155% ( 5) 00:08:07.285 22.745 - 22.843: 97.0973% ( 2) 00:08:07.285 22.843 - 22.942: 97.1791% ( 2) 00:08:07.285 22.942 - 23.040: 97.3426% ( 4) 00:08:07.285 23.040 - 23.138: 97.3835% ( 1) 00:08:07.285 23.335 - 23.434: 97.4244% ( 1) 00:08:07.285 23.434 - 23.532: 97.4652% ( 1) 00:08:07.285 23.631 - 23.729: 97.5061% ( 1) 00:08:07.285 23.729 - 23.828: 97.6697% ( 4) 00:08:07.285 23.828 - 23.926: 97.7105% ( 1) 00:08:07.285 24.025 - 24.123: 97.7923% ( 2) 00:08:07.285 24.123 - 24.222: 97.9150% ( 3) 00:08:07.285 24.222 - 24.320: 97.9967% ( 2) 00:08:07.285 24.418 - 24.517: 98.0376% ( 1) 00:08:07.285 24.517 - 24.615: 98.0785% ( 1) 00:08:07.285 24.615 - 24.714: 98.1194% ( 1) 00:08:07.285 24.911 - 25.009: 98.1603% ( 1) 00:08:07.285 25.403 - 25.600: 98.2011% ( 1) 00:08:07.285 26.191 - 26.388: 98.3238% ( 3) 00:08:07.285 26.388 - 26.585: 98.3647% ( 1) 00:08:07.285 26.585 - 26.782: 98.4056% ( 1) 00:08:07.285 26.782 - 26.978: 98.4464% ( 1) 00:08:07.285 27.175 - 27.372: 98.5282% ( 2) 00:08:07.285 27.766 - 27.963: 98.6100% ( 2) 00:08:07.285 27.963 - 28.160: 98.6509% ( 1) 00:08:07.285 28.160 - 28.357: 98.6917% ( 1) 00:08:07.285 28.357 - 28.554: 98.7326% ( 1) 00:08:07.285 28.751 - 28.948: 98.7735% ( 1) 00:08:07.285 29.145 - 29.342: 98.8144% ( 1) 00:08:07.285 29.342 - 29.538: 98.8553% ( 1) 00:08:07.285 29.538 - 29.735: 98.8962% ( 1) 00:08:07.285 29.735 - 29.932: 98.9370% ( 1) 00:08:07.285 30.523 - 30.720: 99.0188% ( 2) 00:08:07.285 31.705 - 31.902: 99.0597% ( 1) 00:08:07.285 32.098 - 32.295: 99.1006% ( 1) 00:08:07.285 33.280 - 33.477: 99.1415% ( 1) 00:08:07.285 34.068 - 34.265: 99.1823% ( 1) 00:08:07.285 36.431 - 36.628: 99.2232% ( 1) 00:08:07.285 38.400 - 38.597: 99.3050% ( 2) 00:08:07.285 40.960 - 41.157: 99.3868% ( 2) 00:08:07.285 49.034 - 49.231: 99.4276% ( 1) 00:08:07.285 51.594 - 51.988: 99.4685% ( 1) 00:08:07.285 52.775 - 53.169: 99.5094% ( 1) 00:08:07.285 56.714 - 57.108: 99.5503% ( 1) 00:08:07.285 59.471 - 59.865: 99.5912% ( 1) 00:08:07.285 62.622 - 63.015: 99.6321% ( 1) 00:08:07.285 64.197 - 64.591: 99.6729% ( 1) 00:08:07.285 65.378 - 65.772: 99.7138% ( 1) 00:08:07.285 66.954 - 67.348: 99.7956% ( 2) 00:08:07.285 67.348 - 67.742: 99.8365% ( 1) 00:08:07.285 76.406 - 76.800: 99.8774% ( 1) 00:08:07.285 76.800 - 77.194: 99.9182% ( 1) 00:08:07.285 93.735 - 94.129: 99.9591% ( 1) 00:08:07.285 146.511 - 147.298: 100.0000% ( 1) 00:08:07.285 00:08:07.285 Complete histogram 
00:08:07.285 ================== 00:08:07.285 Range in us Cumulative Count 00:08:07.285 8.172 - 8.222: 0.0409% ( 1) 00:08:07.285 8.222 - 8.271: 0.2044% ( 4) 00:08:07.285 8.271 - 8.320: 1.0221% ( 20) 00:08:07.285 8.320 - 8.369: 3.2298% ( 54) 00:08:07.285 8.369 - 8.418: 6.7866% ( 87) 00:08:07.285 8.418 - 8.468: 13.0826% ( 154) 00:08:07.285 8.468 - 8.517: 19.8692% ( 166) 00:08:07.285 8.517 - 8.566: 27.4734% ( 186) 00:08:07.285 8.566 - 8.615: 34.8324% ( 180) 00:08:07.285 8.615 - 8.665: 41.8643% ( 172) 00:08:07.285 8.665 - 8.714: 47.6697% ( 142) 00:08:07.285 8.714 - 8.763: 52.5348% ( 119) 00:08:07.285 8.763 - 8.812: 56.9910% ( 109) 00:08:07.285 8.812 - 8.862: 61.5699% ( 112) 00:08:07.285 8.862 - 8.911: 65.8626% ( 105) 00:08:07.285 8.911 - 8.960: 69.1742% ( 81) 00:08:07.285 8.960 - 9.009: 72.8945% ( 91) 00:08:07.285 9.009 - 9.058: 75.6746% ( 68) 00:08:07.285 9.058 - 9.108: 77.9640% ( 56) 00:08:07.285 9.108 - 9.157: 79.5993% ( 40) 00:08:07.285 9.157 - 9.206: 81.7253% ( 52) 00:08:07.285 9.206 - 9.255: 82.6247% ( 22) 00:08:07.285 9.255 - 9.305: 83.8512% ( 30) 00:08:07.285 9.305 - 9.354: 85.1186% ( 31) 00:08:07.285 9.354 - 9.403: 86.2224% ( 27) 00:08:07.285 9.403 - 9.452: 86.8357% ( 15) 00:08:07.285 9.452 - 9.502: 87.5715% ( 18) 00:08:07.285 9.502 - 9.551: 88.7163% ( 28) 00:08:07.285 9.551 - 9.600: 89.4522% ( 18) 00:08:07.285 9.600 - 9.649: 89.9428% ( 12) 00:08:07.285 9.649 - 9.698: 90.4334% ( 12) 00:08:07.285 9.698 - 9.748: 91.0875% ( 16) 00:08:07.285 9.748 - 9.797: 91.4554% ( 9) 00:08:07.285 9.797 - 9.846: 92.3140% ( 21) 00:08:07.285 9.846 - 9.895: 92.7228% ( 10) 00:08:07.285 9.895 - 9.945: 93.2543% ( 13) 00:08:07.285 9.945 - 9.994: 93.4996% ( 6) 00:08:07.285 9.994 - 10.043: 94.0311% ( 13) 00:08:07.285 10.043 - 10.092: 94.1537% ( 3) 00:08:07.285 10.092 - 10.142: 94.4399% ( 7) 00:08:07.285 10.142 - 10.191: 94.5626% ( 3) 00:08:07.285 10.191 - 10.240: 94.7261% ( 4) 00:08:07.285 10.240 - 10.289: 94.9305% ( 5) 00:08:07.285 10.289 - 10.338: 95.2167% ( 7) 00:08:07.285 10.338 - 10.388: 95.4620% ( 6) 00:08:07.285 10.388 - 10.437: 95.7073% ( 6) 00:08:07.285 10.437 - 10.486: 95.9117% ( 5) 00:08:07.285 10.486 - 10.535: 96.2388% ( 8) 00:08:07.285 10.535 - 10.585: 96.4841% ( 6) 00:08:07.285 10.585 - 10.634: 96.6885% ( 5) 00:08:07.285 10.634 - 10.683: 96.8111% ( 3) 00:08:07.285 10.683 - 10.732: 96.9747% ( 4) 00:08:07.285 10.732 - 10.782: 97.0564% ( 2) 00:08:07.285 10.831 - 10.880: 97.2200% ( 4) 00:08:07.285 10.880 - 10.929: 97.3426% ( 3) 00:08:07.285 10.929 - 10.978: 97.3835% ( 1) 00:08:07.285 10.978 - 11.028: 97.5061% ( 3) 00:08:07.285 11.225 - 11.274: 97.5470% ( 1) 00:08:07.285 11.422 - 11.471: 97.6288% ( 2) 00:08:07.285 11.717 - 11.766: 97.6697% ( 1) 00:08:07.285 11.914 - 11.963: 97.7105% ( 1) 00:08:07.285 12.160 - 12.209: 97.7514% ( 1) 00:08:07.285 12.554 - 12.603: 97.7923% ( 1) 00:08:07.285 15.557 - 15.655: 97.8332% ( 1) 00:08:07.285 15.655 - 15.754: 97.8741% ( 1) 00:08:07.285 15.754 - 15.852: 97.9150% ( 1) 00:08:07.285 15.852 - 15.951: 97.9558% ( 1) 00:08:07.285 15.951 - 16.049: 97.9967% ( 1) 00:08:07.285 16.148 - 16.246: 98.1194% ( 3) 00:08:07.285 16.246 - 16.345: 98.2011% ( 2) 00:08:07.285 16.443 - 16.542: 98.2420% ( 1) 00:08:07.285 16.542 - 16.640: 98.2829% ( 1) 00:08:07.285 16.640 - 16.738: 98.4056% ( 3) 00:08:07.285 16.738 - 16.837: 98.4873% ( 2) 00:08:07.285 16.837 - 16.935: 98.6509% ( 4) 00:08:07.285 16.935 - 17.034: 98.7735% ( 3) 00:08:07.285 17.034 - 17.132: 98.8553% ( 2) 00:08:07.285 17.329 - 17.428: 98.8962% ( 1) 00:08:07.285 17.625 - 17.723: 98.9370% ( 1) 00:08:07.285 17.723 - 17.822: 99.0188% ( 
2) 00:08:07.285 17.920 - 18.018: 99.0597% ( 1) 00:08:07.285 18.018 - 18.117: 99.1415% ( 2) 00:08:07.285 22.548 - 22.646: 99.1823% ( 1) 00:08:07.285 23.532 - 23.631: 99.2232% ( 1) 00:08:07.285 24.123 - 24.222: 99.2641% ( 1) 00:08:07.285 24.418 - 24.517: 99.3050% ( 1) 00:08:07.285 25.009 - 25.108: 99.3459% ( 1) 00:08:07.285 26.978 - 27.175: 99.3868% ( 1) 00:08:07.285 27.175 - 27.372: 99.4276% ( 1) 00:08:07.286 28.160 - 28.357: 99.4685% ( 1) 00:08:07.286 28.751 - 28.948: 99.5094% ( 1) 00:08:07.286 29.735 - 29.932: 99.5503% ( 1) 00:08:07.286 30.523 - 30.720: 99.5912% ( 1) 00:08:07.286 31.311 - 31.508: 99.6321% ( 1) 00:08:07.286 31.902 - 32.098: 99.6729% ( 1) 00:08:07.286 33.477 - 33.674: 99.7138% ( 1) 00:08:07.286 35.643 - 35.840: 99.7547% ( 1) 00:08:07.286 43.520 - 43.717: 99.7956% ( 1) 00:08:07.286 53.563 - 53.957: 99.8365% ( 1) 00:08:07.286 58.683 - 59.077: 99.8774% ( 1) 00:08:07.286 100.825 - 101.612: 99.9182% ( 1) 00:08:07.286 143.360 - 144.148: 99.9591% ( 1) 00:08:07.286 184.320 - 185.108: 100.0000% ( 1) 00:08:07.286 00:08:07.286 00:08:07.286 real 0m1.254s 00:08:07.286 user 0m1.071s 00:08:07.286 sys 0m0.123s 00:08:07.286 ************************************ 00:08:07.286 END TEST nvme_overhead 00:08:07.286 ************************************ 00:08:07.286 16:34:51 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.286 16:34:51 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:07.286 16:34:51 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:07.286 16:34:51 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:07.286 16:34:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.286 16:34:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.286 ************************************ 00:08:07.286 START TEST nvme_arbitration 00:08:07.286 ************************************ 00:08:07.286 16:34:51 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:10.588 Initializing NVMe Controllers 00:08:10.588 Attached to 0000:00:10.0 00:08:10.588 Attached to 0000:00:11.0 00:08:10.588 Attached to 0000:00:13.0 00:08:10.588 Attached to 0000:00:12.0 00:08:10.588 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:10.588 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:10.588 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:10.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:10.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:10.588 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:10.588 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:10.588 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:10.588 Initialization complete. Launching workers. 
00:08:10.588 Starting thread on core 1 with urgent priority queue 00:08:10.588 Starting thread on core 2 with urgent priority queue 00:08:10.588 Starting thread on core 3 with urgent priority queue 00:08:10.588 Starting thread on core 0 with urgent priority queue 00:08:10.588 QEMU NVMe Ctrl (12340 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:08:10.588 QEMU NVMe Ctrl (12342 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:08:10.588 QEMU NVMe Ctrl (12341 ) core 1: 768.00 IO/s 130.21 secs/100000 ios 00:08:10.588 QEMU NVMe Ctrl (12342 ) core 1: 768.00 IO/s 130.21 secs/100000 ios 00:08:10.588 QEMU NVMe Ctrl (12343 ) core 2: 704.00 IO/s 142.05 secs/100000 ios 00:08:10.588 QEMU NVMe Ctrl (12342 ) core 3: 746.67 IO/s 133.93 secs/100000 ios 00:08:10.588 ======================================================== 00:08:10.588 00:08:10.588 00:08:10.588 real 0m3.392s 00:08:10.588 user 0m9.342s 00:08:10.588 sys 0m0.151s 00:08:10.588 16:34:55 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.588 16:34:55 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:10.588 ************************************ 00:08:10.588 END TEST nvme_arbitration 00:08:10.588 ************************************ 00:08:10.588 16:34:55 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:10.588 16:34:55 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.588 16:34:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.588 16:34:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.588 ************************************ 00:08:10.588 START TEST nvme_single_aen 00:08:10.588 ************************************ 00:08:10.588 16:34:55 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:10.848 Asynchronous Event Request test 00:08:10.848 Attached to 0000:00:10.0 00:08:10.848 Attached to 0000:00:11.0 00:08:10.848 Attached to 0000:00:13.0 00:08:10.848 Attached to 0000:00:12.0 00:08:10.848 Reset controller to setup AER completions for this process 00:08:10.848 Registering asynchronous event callbacks... 
00:08:10.848 Getting orig temperature thresholds of all controllers 00:08:10.848 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.848 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.848 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.848 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.848 Setting all controllers temperature threshold low to trigger AER 00:08:10.848 Waiting for all controllers temperature threshold to be set lower 00:08:10.848 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.848 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:10.848 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.848 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:10.848 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.848 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:10.848 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.848 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:10.848 Waiting for all controllers to trigger AER and reset threshold 00:08:10.848 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.848 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.848 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.848 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.848 Cleaning up... 00:08:10.848 ************************************ 00:08:10.848 END TEST nvme_single_aen 00:08:10.848 ************************************ 00:08:10.848 00:08:10.848 real 0m0.256s 00:08:10.848 user 0m0.095s 00:08:10.848 sys 0m0.108s 00:08:10.848 16:34:55 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.848 16:34:55 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:10.848 16:34:55 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:10.848 16:34:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.848 16:34:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.848 16:34:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.848 ************************************ 00:08:10.848 START TEST nvme_doorbell_aers 00:08:10.848 ************************************ 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.848 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
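The asynchronous-event sequence printed above has a fixed shape: read each controller's original temperature threshold, set the threshold below the current temperature so a temperature event fires, handle the event by restoring the threshold, then report the current temperature. The helper names below are hypothetical and only outline that control flow in plain C; they do not mirror the aer tool's real SPDK calls:

    #include <stdio.h>

    /* Hypothetical stand-ins for controller feature accessors. */
    static int g_threshold_k    = 343; /* original threshold: 343 K (70 C) */
    static int g_current_temp_k = 323; /* current temperature: 323 K (50 C) */

    static int  get_temp_threshold(void)        { return g_threshold_k; }
    static void set_temp_threshold(int kelvin)  { g_threshold_k = kelvin; }
    static int  get_current_temperature(void)   { return g_current_temp_k; }

    /* Invoked when the (simulated) async event for the health log page arrives. */
    static void aer_cb(int orig_threshold)
    {
        printf("aer_cb - resetting threshold to %d K\n", orig_threshold);
        set_temp_threshold(orig_threshold);
    }

    int main(void)
    {
        int orig = get_temp_threshold();
        printf("original temperature threshold: %d K\n", orig);

        /* Arm the event: drop the threshold below the current temperature. */
        set_temp_threshold(get_current_temperature() - 5);

        /* In the real tool this arrives asynchronously; simulate it here. */
        if (get_current_temperature() > get_temp_threshold())
            aer_cb(orig);

        printf("current temperature: %d K, threshold restored to %d K\n",
               get_current_temperature(), get_temp_threshold());
        return 0;
    }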
00:08:11.125 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:11.125 16:34:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:11.125 16:34:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:11.125 16:34:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:11.385 [2024-11-20 16:34:56.006996] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:21.389 Executing: test_write_invalid_db 00:08:21.389 Waiting for AER completion... 00:08:21.389 Failure: test_write_invalid_db 00:08:21.389 00:08:21.389 Executing: test_invalid_db_write_overflow_sq 00:08:21.389 Waiting for AER completion... 00:08:21.389 Failure: test_invalid_db_write_overflow_sq 00:08:21.389 00:08:21.389 Executing: test_invalid_db_write_overflow_cq 00:08:21.389 Waiting for AER completion... 00:08:21.389 Failure: test_invalid_db_write_overflow_cq 00:08:21.389 00:08:21.389 16:35:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:21.389 16:35:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:21.389 [2024-11-20 16:35:06.051316] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:31.390 Executing: test_write_invalid_db 00:08:31.390 Waiting for AER completion... 00:08:31.390 Failure: test_write_invalid_db 00:08:31.390 00:08:31.390 Executing: test_invalid_db_write_overflow_sq 00:08:31.390 Waiting for AER completion... 00:08:31.390 Failure: test_invalid_db_write_overflow_sq 00:08:31.390 00:08:31.390 Executing: test_invalid_db_write_overflow_cq 00:08:31.390 Waiting for AER completion... 00:08:31.390 Failure: test_invalid_db_write_overflow_cq 00:08:31.390 00:08:31.390 16:35:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:31.390 16:35:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:31.390 [2024-11-20 16:35:16.079150] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:41.370 Executing: test_write_invalid_db 00:08:41.370 Waiting for AER completion... 00:08:41.370 Failure: test_write_invalid_db 00:08:41.370 00:08:41.370 Executing: test_invalid_db_write_overflow_sq 00:08:41.370 Waiting for AER completion... 00:08:41.370 Failure: test_invalid_db_write_overflow_sq 00:08:41.370 00:08:41.370 Executing: test_invalid_db_write_overflow_cq 00:08:41.370 Waiting for AER completion... 
00:08:41.370 Failure: test_invalid_db_write_overflow_cq 00:08:41.370 00:08:41.370 16:35:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:41.370 16:35:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:41.370 [2024-11-20 16:35:26.119315] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 Executing: test_write_invalid_db 00:08:51.346 Waiting for AER completion... 00:08:51.346 Failure: test_write_invalid_db 00:08:51.346 00:08:51.346 Executing: test_invalid_db_write_overflow_sq 00:08:51.346 Waiting for AER completion... 00:08:51.346 Failure: test_invalid_db_write_overflow_sq 00:08:51.346 00:08:51.346 Executing: test_invalid_db_write_overflow_cq 00:08:51.346 Waiting for AER completion... 00:08:51.346 Failure: test_invalid_db_write_overflow_cq 00:08:51.346 00:08:51.346 00:08:51.346 real 0m40.217s 00:08:51.346 user 0m34.177s 00:08:51.346 sys 0m5.589s 00:08:51.346 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.346 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:51.346 ************************************ 00:08:51.346 END TEST nvme_doorbell_aers 00:08:51.346 ************************************ 00:08:51.346 16:35:35 nvme -- nvme/nvme.sh@97 -- # uname 00:08:51.346 16:35:35 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:51.346 16:35:35 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:51.346 16:35:35 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:51.346 16:35:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.346 16:35:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.346 ************************************ 00:08:51.346 START TEST nvme_multi_aen 00:08:51.346 ************************************ 00:08:51.346 16:35:35 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:51.346 [2024-11-20 16:35:36.129070] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.129243] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.129256] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.130410] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.130431] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.130441] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.131331] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. 
Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.131353] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.131360] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.132191] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.132211] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 [2024-11-20 16:35:36.132218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63210) is not found. Dropping the request. 00:08:51.346 Child process pid: 63726 00:08:51.605 [Child] Asynchronous Event Request test 00:08:51.605 [Child] Attached to 0000:00:10.0 00:08:51.605 [Child] Attached to 0000:00:11.0 00:08:51.605 [Child] Attached to 0000:00:13.0 00:08:51.605 [Child] Attached to 0000:00:12.0 00:08:51.605 [Child] Registering asynchronous event callbacks... 00:08:51.605 [Child] Getting orig temperature thresholds of all controllers 00:08:51.605 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:51.605 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.605 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.605 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.605 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.605 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.605 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.605 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.605 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.605 [Child] Cleaning up... 00:08:51.605 Asynchronous Event Request test 00:08:51.605 Attached to 0000:00:10.0 00:08:51.605 Attached to 0000:00:11.0 00:08:51.605 Attached to 0000:00:13.0 00:08:51.605 Attached to 0000:00:12.0 00:08:51.605 Reset controller to setup AER completions for this process 00:08:51.605 Registering asynchronous event callbacks... 
00:08:51.605 Getting orig temperature thresholds of all controllers 00:08:51.605 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.605 Setting all controllers temperature threshold low to trigger AER 00:08:51.605 Waiting for all controllers temperature threshold to be set lower 00:08:51.605 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.605 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:51.606 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.606 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:51.606 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.606 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:51.606 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.606 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:51.606 Waiting for all controllers to trigger AER and reset threshold 00:08:51.606 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.606 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.606 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.606 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.606 Cleaning up... 00:08:51.606 00:08:51.606 real 0m0.422s 00:08:51.606 user 0m0.139s 00:08:51.606 sys 0m0.175s 00:08:51.606 16:35:36 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.606 16:35:36 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:51.606 ************************************ 00:08:51.606 END TEST nvme_multi_aen 00:08:51.606 ************************************ 00:08:51.606 16:35:36 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:51.606 16:35:36 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:51.606 16:35:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.606 16:35:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.606 ************************************ 00:08:51.606 START TEST nvme_startup 00:08:51.606 ************************************ 00:08:51.606 16:35:36 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:51.863 Initializing NVMe Controllers 00:08:51.863 Attached to 0000:00:10.0 00:08:51.863 Attached to 0000:00:11.0 00:08:51.863 Attached to 0000:00:13.0 00:08:51.863 Attached to 0000:00:12.0 00:08:51.863 Initialization complete. 00:08:51.863 Time used:147675.297 (us). 
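The "Time used" figure above is wall-clock time around controller attach and initialization. A generic way to take that kind of measurement in C (clock_gettime with a monotonic clock around the work being timed; the attach step itself is stubbed out here):

    #include <stdio.h>
    #include <time.h>

    static double elapsed_us(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
    }

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        /* ... attach/initialize controllers here (stubbed) ... */
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("Time used:%.3f (us).\n", elapsed_us(start, end));
        return 0;
    }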
00:08:51.863 00:08:51.863 real 0m0.218s 00:08:51.863 user 0m0.069s 00:08:51.863 sys 0m0.102s 00:08:51.863 16:35:36 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.863 16:35:36 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:51.863 ************************************ 00:08:51.863 END TEST nvme_startup 00:08:51.863 ************************************ 00:08:51.863 16:35:36 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:51.863 16:35:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.864 16:35:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.864 16:35:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.864 ************************************ 00:08:51.864 START TEST nvme_multi_secondary 00:08:51.864 ************************************ 00:08:51.864 16:35:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:51.864 16:35:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63781 00:08:51.864 16:35:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:51.864 16:35:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63782 00:08:51.864 16:35:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:51.864 16:35:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:55.145 Initializing NVMe Controllers 00:08:55.145 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.145 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.145 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.145 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.145 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:55.145 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:55.145 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:55.145 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:55.145 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:55.145 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:55.145 Initialization complete. Launching workers. 
00:08:55.145 ======================================================== 00:08:55.145 Latency(us) 00:08:55.145 Device Information : IOPS MiB/s Average min max 00:08:55.145 PCIE (0000:00:10.0) NSID 1 from core 1: 7667.53 29.95 2085.35 1021.50 6847.39 00:08:55.145 PCIE (0000:00:11.0) NSID 1 from core 1: 7667.53 29.95 2086.46 1039.52 6636.09 00:08:55.145 PCIE (0000:00:13.0) NSID 1 from core 1: 7667.53 29.95 2086.56 1032.91 6537.06 00:08:55.145 PCIE (0000:00:12.0) NSID 1 from core 1: 7667.53 29.95 2086.68 921.44 6684.64 00:08:55.145 PCIE (0000:00:12.0) NSID 2 from core 1: 7667.53 29.95 2086.73 1025.75 7102.23 00:08:55.145 PCIE (0000:00:12.0) NSID 3 from core 1: 7667.53 29.95 2086.76 1009.01 7566.86 00:08:55.145 ======================================================== 00:08:55.145 Total : 46005.20 179.71 2086.42 921.44 7566.86 00:08:55.145 00:08:55.404 Initializing NVMe Controllers 00:08:55.404 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.404 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.404 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.404 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.404 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:55.404 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:55.404 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:55.404 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:55.404 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:55.404 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:55.404 Initialization complete. Launching workers. 00:08:55.404 ======================================================== 00:08:55.404 Latency(us) 00:08:55.404 Device Information : IOPS MiB/s Average min max 00:08:55.404 PCIE (0000:00:10.0) NSID 1 from core 2: 3306.58 12.92 4837.06 1353.69 13563.42 00:08:55.404 PCIE (0000:00:11.0) NSID 1 from core 2: 3306.58 12.92 4838.47 1317.91 18233.70 00:08:55.404 PCIE (0000:00:13.0) NSID 1 from core 2: 3306.58 12.92 4838.01 1179.39 14205.32 00:08:55.404 PCIE (0000:00:12.0) NSID 1 from core 2: 3306.58 12.92 4838.37 1280.99 14028.79 00:08:55.404 PCIE (0000:00:12.0) NSID 2 from core 2: 3306.58 12.92 4838.11 1049.98 14306.42 00:08:55.404 PCIE (0000:00:12.0) NSID 3 from core 2: 3306.58 12.92 4838.44 892.57 13846.97 00:08:55.404 ======================================================== 00:08:55.404 Total : 19839.51 77.50 4838.08 892.57 18233.70 00:08:55.404 00:08:55.404 16:35:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63781 00:08:57.305 Initializing NVMe Controllers 00:08:57.305 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.305 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.305 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.305 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.305 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:57.305 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:57.305 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:57.305 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:57.305 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:57.305 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:57.305 Initialization complete. Launching workers. 
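As a sanity check on the tables above: with the 4096-byte reads this run uses (-o 4096 on the perf command line), MiB/s is just IOPS x 4096 / 2^20, so 7667.53 IOPS works out to roughly 29.95 MiB/s, matching the per-namespace rows. A tiny C snippet doing the same conversion:

    #include <stdio.h>

    int main(void)
    {
        double iops       = 7667.53; /* per-namespace IOPS from the core 1 table above */
        double block_size = 4096.0;  /* -o 4096 on the perf command line */

        printf("%.2f MiB/s\n", iops * block_size / (1024.0 * 1024.0));
        return 0;
    }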
00:08:57.305 ======================================================== 00:08:57.305 Latency(us) 00:08:57.305 Device Information : IOPS MiB/s Average min max 00:08:57.305 PCIE (0000:00:10.0) NSID 1 from core 0: 10613.78 41.46 1506.22 669.87 6613.29 00:08:57.305 PCIE (0000:00:11.0) NSID 1 from core 0: 10613.78 41.46 1507.07 680.47 6728.66 00:08:57.305 PCIE (0000:00:13.0) NSID 1 from core 0: 10613.78 41.46 1507.04 670.69 6740.93 00:08:57.305 PCIE (0000:00:12.0) NSID 1 from core 0: 10613.78 41.46 1507.02 628.29 6828.04 00:08:57.305 PCIE (0000:00:12.0) NSID 2 from core 0: 10613.78 41.46 1507.00 596.85 6674.59 00:08:57.305 PCIE (0000:00:12.0) NSID 3 from core 0: 10613.78 41.46 1506.98 563.43 6509.52 00:08:57.305 ======================================================== 00:08:57.305 Total : 63682.69 248.76 1506.89 563.43 6828.04 00:08:57.305 00:08:57.305 16:35:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63782 00:08:57.305 16:35:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63851 00:08:57.305 16:35:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:57.305 16:35:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63852 00:08:57.305 16:35:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:57.305 16:35:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:00.586 Initializing NVMe Controllers 00:09:00.586 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.586 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.586 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.586 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.586 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:00.586 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:00.586 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:00.586 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:00.586 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:00.586 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:00.586 Initialization complete. Launching workers. 
00:09:00.586 ======================================================== 00:09:00.586 Latency(us) 00:09:00.586 Device Information : IOPS MiB/s Average min max 00:09:00.586 PCIE (0000:00:10.0) NSID 1 from core 1: 8215.48 32.09 1946.19 709.60 5753.05 00:09:00.586 PCIE (0000:00:11.0) NSID 1 from core 1: 8215.48 32.09 1947.16 724.15 6431.56 00:09:00.586 PCIE (0000:00:13.0) NSID 1 from core 1: 8215.48 32.09 1947.26 735.15 5799.97 00:09:00.587 PCIE (0000:00:12.0) NSID 1 from core 1: 8215.48 32.09 1947.23 737.45 6080.32 00:09:00.587 PCIE (0000:00:12.0) NSID 2 from core 1: 8215.48 32.09 1947.26 732.50 6267.76 00:09:00.587 PCIE (0000:00:12.0) NSID 3 from core 1: 8215.48 32.09 1947.25 727.22 6017.62 00:09:00.587 ======================================================== 00:09:00.587 Total : 49292.87 192.55 1947.06 709.60 6431.56 00:09:00.587 00:09:00.587 Initializing NVMe Controllers 00:09:00.587 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.587 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.587 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.587 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.587 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:00.587 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:00.587 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:00.587 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:00.587 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:00.587 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:00.587 Initialization complete. Launching workers. 00:09:00.587 ======================================================== 00:09:00.587 Latency(us) 00:09:00.587 Device Information : IOPS MiB/s Average min max 00:09:00.587 PCIE (0000:00:10.0) NSID 1 from core 0: 8080.57 31.56 1978.67 709.80 5220.69 00:09:00.587 PCIE (0000:00:11.0) NSID 1 from core 0: 8080.57 31.56 1979.61 731.05 5347.65 00:09:00.587 PCIE (0000:00:13.0) NSID 1 from core 0: 8080.57 31.56 1979.56 720.19 5589.55 00:09:00.587 PCIE (0000:00:12.0) NSID 1 from core 0: 8080.57 31.56 1979.58 741.28 5549.88 00:09:00.587 PCIE (0000:00:12.0) NSID 2 from core 0: 8080.57 31.56 1979.55 742.46 5728.65 00:09:00.587 PCIE (0000:00:12.0) NSID 3 from core 0: 8080.57 31.56 1979.52 731.01 4907.41 00:09:00.587 ======================================================== 00:09:00.587 Total : 48483.44 189.39 1979.41 709.80 5728.65 00:09:00.587 00:09:02.486 Initializing NVMe Controllers 00:09:02.486 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:02.486 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:02.486 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:02.486 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:02.486 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:02.486 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:02.486 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:02.486 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:02.486 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:02.486 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:02.486 Initialization complete. Launching workers. 
00:09:02.486 ======================================================== 00:09:02.486 Latency(us) 00:09:02.486 Device Information : IOPS MiB/s Average min max 00:09:02.486 PCIE (0000:00:10.0) NSID 1 from core 2: 4614.19 18.02 3465.11 734.83 13952.04 00:09:02.486 PCIE (0000:00:11.0) NSID 1 from core 2: 4614.19 18.02 3466.98 746.26 13206.48 00:09:02.486 PCIE (0000:00:13.0) NSID 1 from core 2: 4614.19 18.02 3466.74 743.21 12772.62 00:09:02.486 PCIE (0000:00:12.0) NSID 1 from core 2: 4614.19 18.02 3466.68 709.72 12218.49 00:09:02.486 PCIE (0000:00:12.0) NSID 2 from core 2: 4614.19 18.02 3466.63 664.55 12762.41 00:09:02.486 PCIE (0000:00:12.0) NSID 3 from core 2: 4614.19 18.02 3466.58 603.46 12482.90 00:09:02.486 ======================================================== 00:09:02.486 Total : 27685.13 108.15 3466.45 603.46 13952.04 00:09:02.486 00:09:02.486 16:35:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63851 00:09:02.486 16:35:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63852 00:09:02.486 00:09:02.486 real 0m10.617s 00:09:02.486 user 0m18.411s 00:09:02.486 sys 0m0.615s 00:09:02.486 16:35:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.486 16:35:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:02.486 ************************************ 00:09:02.486 END TEST nvme_multi_secondary 00:09:02.486 ************************************ 00:09:02.486 16:35:47 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:02.486 16:35:47 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:02.486 16:35:47 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62796 ]] 00:09:02.486 16:35:47 nvme -- common/autotest_common.sh@1094 -- # kill 62796 00:09:02.486 16:35:47 nvme -- common/autotest_common.sh@1095 -- # wait 62796 00:09:02.486 [2024-11-20 16:35:47.329489] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.329542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.329563] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.329576] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.331174] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.331212] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.331224] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.331236] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.332818] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 
00:09:02.486 [2024-11-20 16:35:47.332855] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.332866] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.332877] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.334485] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.334524] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.334535] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.486 [2024-11-20 16:35:47.334546] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:09:02.744 16:35:47 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:02.744 16:35:47 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:02.744 16:35:47 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:02.744 16:35:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.744 16:35:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.744 16:35:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.744 ************************************ 00:09:02.744 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:02.744 ************************************ 00:09:02.744 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:02.744 * Looking for test storage... 
00:09:02.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.745 --rc genhtml_branch_coverage=1 00:09:02.745 --rc genhtml_function_coverage=1 00:09:02.745 --rc genhtml_legend=1 00:09:02.745 --rc geninfo_all_blocks=1 00:09:02.745 --rc geninfo_unexecuted_blocks=1 00:09:02.745 00:09:02.745 ' 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.745 --rc genhtml_branch_coverage=1 00:09:02.745 --rc genhtml_function_coverage=1 00:09:02.745 --rc genhtml_legend=1 00:09:02.745 --rc geninfo_all_blocks=1 00:09:02.745 --rc geninfo_unexecuted_blocks=1 00:09:02.745 00:09:02.745 ' 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.745 --rc genhtml_branch_coverage=1 00:09:02.745 --rc genhtml_function_coverage=1 00:09:02.745 --rc genhtml_legend=1 00:09:02.745 --rc geninfo_all_blocks=1 00:09:02.745 --rc geninfo_unexecuted_blocks=1 00:09:02.745 00:09:02.745 ' 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.745 --rc genhtml_branch_coverage=1 00:09:02.745 --rc genhtml_function_coverage=1 00:09:02.745 --rc genhtml_legend=1 00:09:02.745 --rc geninfo_all_blocks=1 00:09:02.745 --rc geninfo_unexecuted_blocks=1 00:09:02.745 00:09:02.745 ' 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:02.745 
16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:02.745 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64018 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64018 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64018 ']' 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
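get_first_nvme_bdf, traced just above, simply takes the first address that gen_nvme.sh reports; after that the test is driven entirely through rpc.py against the spdk_tgt being started here. The trace that follows exercises the sequence sketched below (paths abbreviated, the base64 command payload elided as CMD_B64; opcode 10 is the admin Get Features command the log later prints as GET FEATURES NUMBER OF QUEUES):

    # first NVMe PCI address, as get_nvme_bdfs builds it
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}                                   # 0000:00:10.0 on this VM

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf"
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
           --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" &   # held by do_not_submit
    rpc.py bdev_nvme_reset_controller nvme0    # the reset must complete the stuck admin command
    wait                                       # send_cmd returns; its .cpl must decode to sct=0, sc=1
    rpc.py bdev_nvme_detach_controller nvme0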
00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.002 16:35:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.002 [2024-11-20 16:35:47.728053] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:09:03.002 [2024-11-20 16:35:47.728173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64018 ] 00:09:03.260 [2024-11-20 16:35:47.896074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.260 [2024-11-20 16:35:48.002627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.260 [2024-11-20 16:35:48.002847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.260 [2024-11-20 16:35:48.002943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.260 [2024-11-20 16:35:48.003090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.824 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.824 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:03.824 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:03.824 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.824 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.082 nvme0n1 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ZLeUO.txt 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.082 true 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732120548 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64041 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:04.082 16:35:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.983 [2024-11-20 16:35:50.742168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:05.983 [2024-11-20 16:35:50.742450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:05.983 [2024-11-20 16:35:50.742474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:05.983 [2024-11-20 16:35:50.742495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.983 [2024-11-20 16:35:50.744188] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:05.983 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64041 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64041 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64041 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ZLeUO.txt 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ZLeUO.txt 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64018 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64018 ']' 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64018 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64018 00:09:05.983 killing process with pid 64018 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64018' 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64018 00:09:05.983 16:35:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64018 00:09:07.886 16:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:07.886 16:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:07.886 ************************************ 00:09:07.886 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:07.886 ************************************ 00:09:07.886 00:09:07.886 real 0m4.925s 
00:09:07.886 user 0m17.594s 00:09:07.886 sys 0m0.526s 00:09:07.886 16:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.886 16:35:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 16:35:52 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:07.886 16:35:52 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:07.886 16:35:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.886 16:35:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.886 16:35:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 ************************************ 00:09:07.886 START TEST nvme_fio 00:09:07.886 ************************************ 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:07.886 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:07.886 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:08.144 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:08.144 16:35:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:08.145 16:35:52 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:08.145 16:35:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:08.402 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.402 fio-3.35 00:09:08.402 Starting 1 thread 00:09:16.578 00:09:16.578 test: (groupid=0, jobs=1): err= 0: pid=64181: Wed Nov 20 16:36:00 2024 00:09:16.578 read: IOPS=22.5k, BW=88.0MiB/s (92.3MB/s)(176MiB/2001msec) 00:09:16.578 slat (nsec): min=3372, max=88477, avg=5141.25, stdev=2264.96 00:09:16.578 clat (usec): min=240, max=7953, avg=2834.12, stdev=778.31 00:09:16.579 lat (usec): min=245, max=7979, avg=2839.26, stdev=779.60 00:09:16.579 clat percentiles (usec): 00:09:16.579 | 1.00th=[ 1680], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:16.579 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2638], 00:09:16.579 | 70.00th=[ 2704], 80.00th=[ 2900], 90.00th=[ 3785], 95.00th=[ 4555], 00:09:16.579 | 99.00th=[ 5997], 99.50th=[ 6456], 99.90th=[ 7111], 99.95th=[ 7570], 00:09:16.579 | 99.99th=[ 7701] 00:09:16.579 bw ( KiB/s): min=86520, max=91512, per=99.23%, avg=89397.33, stdev=2581.91, samples=3 00:09:16.579 iops : min=21630, max=22878, avg=22349.33, stdev=645.48, samples=3 00:09:16.579 write: IOPS=22.4k, BW=87.5MiB/s (91.7MB/s)(175MiB/2001msec); 0 zone resets 00:09:16.579 slat (nsec): min=3447, max=83501, avg=5387.35, stdev=2142.38 00:09:16.579 clat (usec): min=232, max=7756, avg=2845.30, stdev=796.06 00:09:16.579 lat (usec): min=236, max=7763, avg=2850.68, stdev=797.31 00:09:16.579 clat percentiles (usec): 00:09:16.579 | 1.00th=[ 1663], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:16.579 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2638], 00:09:16.579 | 70.00th=[ 2704], 80.00th=[ 2933], 90.00th=[ 3818], 95.00th=[ 4621], 00:09:16.579 | 99.00th=[ 5997], 99.50th=[ 6456], 99.90th=[ 7177], 99.95th=[ 7570], 00:09:16.579 | 99.99th=[ 7701] 00:09:16.579 bw ( KiB/s): min=88400, max=91232, per=100.00%, avg=89610.67, stdev=1459.98, samples=3 00:09:16.579 iops : min=22100, max=22808, avg=22402.67, stdev=364.99, samples=3 00:09:16.579 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:09:16.579 lat (msec) : 2=2.71%, 4=89.01%, 10=8.22% 00:09:16.579 cpu : usr=99.15%, sys=0.00%, ctx=4, majf=0, 
minf=608 00:09:16.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.579 issued rwts: total=45070,44816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.579 00:09:16.579 Run status group 0 (all jobs): 00:09:16.579 READ: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=176MiB (185MB), run=2001-2001msec 00:09:16.579 WRITE: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=175MiB (184MB), run=2001-2001msec 00:09:16.579 ----------------------------------------------------- 00:09:16.579 Suppressions used: 00:09:16.579 count bytes template 00:09:16.579 1 32 /usr/src/fio/parse.c 00:09:16.579 1 8 libtcmalloc_minimal.so 00:09:16.579 ----------------------------------------------------- 00:09:16.579 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.579 16:36:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.579 16:36:00 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.579 16:36:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:16.579 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.579 fio-3.35 00:09:16.579 Starting 1 thread 00:09:23.134 00:09:23.134 test: (groupid=0, jobs=1): err= 0: pid=64242: Wed Nov 20 16:36:07 2024 00:09:23.134 read: IOPS=22.9k, BW=89.4MiB/s (93.7MB/s)(179MiB/2001msec) 00:09:23.134 slat (nsec): min=3365, max=96998, avg=5069.01, stdev=2301.93 00:09:23.134 clat (usec): min=258, max=8840, avg=2793.23, stdev=805.49 00:09:23.134 lat (usec): min=263, max=8876, avg=2798.30, stdev=806.85 00:09:23.134 clat percentiles (usec): 00:09:23.134 | 1.00th=[ 1614], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:23.134 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:23.134 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3359], 95.00th=[ 4424], 00:09:23.134 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7767], 99.95th=[ 7832], 00:09:23.134 | 99.99th=[ 8717] 00:09:23.134 bw ( KiB/s): min=86552, max=91696, per=98.14%, avg=89845.67, stdev=2859.65, samples=3 00:09:23.134 iops : min=21638, max=22924, avg=22461.33, stdev=714.85, samples=3 00:09:23.134 write: IOPS=22.8k, BW=88.9MiB/s (93.2MB/s)(178MiB/2001msec); 0 zone resets 00:09:23.134 slat (nsec): min=3455, max=75174, avg=5317.69, stdev=2279.06 00:09:23.134 clat (usec): min=227, max=8721, avg=2794.62, stdev=803.86 00:09:23.134 lat (usec): min=232, max=8735, avg=2799.94, stdev=805.25 00:09:23.134 clat percentiles (usec): 00:09:23.134 | 1.00th=[ 1680], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:23.134 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:23.134 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3359], 95.00th=[ 4424], 00:09:23.134 | 99.00th=[ 6456], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[ 7898], 00:09:23.134 | 99.99th=[ 8455] 00:09:23.134 bw ( KiB/s): min=86280, max=92952, per=98.97%, avg=90064.67, stdev=3425.32, samples=3 00:09:23.134 iops : min=21570, max=23238, avg=22516.00, stdev=856.26, samples=3 00:09:23.134 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.05% 00:09:23.134 lat (msec) : 2=2.50%, 4=90.68%, 10=6.74% 00:09:23.134 cpu : usr=99.15%, sys=0.05%, ctx=2, majf=0, minf=607 00:09:23.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:23.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:23.135 issued rwts: total=45797,45524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.135 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:23.135 00:09:23.135 Run status group 0 (all jobs): 00:09:23.135 READ: bw=89.4MiB/s (93.7MB/s), 89.4MiB/s-89.4MiB/s (93.7MB/s-93.7MB/s), io=179MiB (188MB), run=2001-2001msec 00:09:23.135 WRITE: bw=88.9MiB/s (93.2MB/s), 88.9MiB/s-88.9MiB/s (93.2MB/s-93.2MB/s), io=178MiB (186MB), run=2001-2001msec 00:09:23.395 ----------------------------------------------------- 00:09:23.395 Suppressions used: 00:09:23.395 count bytes template 00:09:23.395 1 32 /usr/src/fio/parse.c 00:09:23.395 1 8 libtcmalloc_minimal.so 00:09:23.395 ----------------------------------------------------- 00:09:23.395 00:09:23.395 
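Each of these per-controller fio runs goes through the fio_nvme/fio_plugin helpers seen in the trace: the SPDK external ioengine is an ASan-instrumented shared object, so the wrapper locates the ASan runtime with ldd and preloads it ahead of the plugin before launching fio. The PCI address in --filename uses dots instead of colons because fio would otherwise split the name on ':'. A minimal sketch of the wrapper's core, using the paths resolved in this log:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096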
16:36:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:23.395 16:36:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:23.395 16:36:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:23.395 16:36:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:23.657 16:36:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:23.657 16:36:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:23.916 16:36:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:23.916 16:36:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:23.916 16:36:08 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:23.916 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:23.916 fio-3.35 00:09:23.916 Starting 1 thread 00:09:32.056 00:09:32.056 test: (groupid=0, jobs=1): err= 0: pid=64303: Wed Nov 20 16:36:16 2024 00:09:32.056 read: IOPS=13.6k, BW=53.0MiB/s (55.6MB/s)(106MiB/2001msec) 00:09:32.056 slat (nsec): min=3309, max=72430, avg=5879.30, stdev=3160.88 00:09:32.056 clat (usec): min=627, max=104190, avg=3997.91, stdev=5529.33 00:09:32.056 lat (usec): min=641, max=104203, avg=4003.79, stdev=5529.72 00:09:32.056 clat percentiles (usec): 00:09:32.056 | 1.00th=[ 1663], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2606], 00:09:32.056 | 30.00th=[ 2704], 
40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 3195], 00:09:32.056 | 70.00th=[ 3556], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 6259], 00:09:32.056 | 99.00th=[ 28443], 99.50th=[ 55837], 99.90th=[102237], 99.95th=[103285], 00:09:32.056 | 99.99th=[103285] 00:09:32.056 bw ( KiB/s): min=32040, max=76552, per=99.27%, avg=53928.00, stdev=22265.13, samples=3 00:09:32.056 iops : min= 8010, max=19138, avg=13482.00, stdev=5566.28, samples=3 00:09:32.056 write: IOPS=13.6k, BW=53.0MiB/s (55.6MB/s)(106MiB/2001msec); 0 zone resets 00:09:32.056 slat (nsec): min=3358, max=92218, avg=6090.08, stdev=3114.33 00:09:32.056 clat (usec): min=671, max=114911, avg=5400.50, stdev=10236.42 00:09:32.056 lat (usec): min=683, max=114923, avg=5406.59, stdev=10236.69 00:09:32.056 clat percentiles (msec): 00:09:32.056 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:09:32.056 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4], 00:09:32.056 | 70.00th=[ 4], 80.00th=[ 5], 90.00th=[ 6], 95.00th=[ 12], 00:09:32.056 | 99.00th=[ 64], 99.50th=[ 68], 99.90th=[ 114], 99.95th=[ 114], 00:09:32.056 | 99.99th=[ 115] 00:09:32.056 bw ( KiB/s): min=31704, max=76392, per=99.42%, avg=53952.00, stdev=22344.62, samples=3 00:09:32.056 iops : min= 7926, max=19098, avg=13488.00, stdev=5586.15, samples=3 00:09:32.056 lat (usec) : 750=0.01%, 1000=0.02% 00:09:32.056 lat (msec) : 2=1.60%, 4=72.96%, 10=21.96%, 20=0.48%, 50=1.92% 00:09:32.056 lat (msec) : 100=0.83%, 250=0.23% 00:09:32.056 cpu : usr=99.05%, sys=0.00%, ctx=3, majf=0, minf=607 00:09:32.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:32.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.056 issued rwts: total=27175,27148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.056 00:09:32.056 Run status group 0 (all jobs): 00:09:32.056 READ: bw=53.0MiB/s (55.6MB/s), 53.0MiB/s-53.0MiB/s (55.6MB/s-55.6MB/s), io=106MiB (111MB), run=2001-2001msec 00:09:32.056 WRITE: bw=53.0MiB/s (55.6MB/s), 53.0MiB/s-53.0MiB/s (55.6MB/s-55.6MB/s), io=106MiB (111MB), run=2001-2001msec 00:09:32.317 ----------------------------------------------------- 00:09:32.317 Suppressions used: 00:09:32.317 count bytes template 00:09:32.317 1 32 /usr/src/fio/parse.c 00:09:32.317 1 8 libtcmalloc_minimal.so 00:09:32.317 ----------------------------------------------------- 00:09:32.317 00:09:32.317 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:32.317 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:32.317 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:32.317 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:32.579 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:32.579 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:32.840 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:32.840 16:36:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:32.840 16:36:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:33.100 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:33.100 fio-3.35 00:09:33.100 Starting 1 thread 00:09:45.337 00:09:45.337 test: (groupid=0, jobs=1): err= 0: pid=64368: Wed Nov 20 16:36:28 2024 00:09:45.337 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2001msec) 00:09:45.337 slat (usec): min=4, max=108, avg= 6.61, stdev= 3.46 00:09:45.337 clat (usec): min=262, max=62028, avg=4268.09, stdev=2262.40 00:09:45.337 lat (usec): min=267, max=62033, avg=4274.70, stdev=2263.00 00:09:45.337 clat percentiles (usec): 00:09:45.337 | 1.00th=[ 2311], 5.00th=[ 2900], 10.00th=[ 3032], 20.00th=[ 3261], 00:09:45.337 | 30.00th=[ 3425], 40.00th=[ 3654], 50.00th=[ 3884], 60.00th=[ 4146], 00:09:45.337 | 70.00th=[ 4490], 80.00th=[ 5014], 90.00th=[ 5932], 95.00th=[ 6718], 00:09:45.337 | 99.00th=[ 8717], 99.50th=[ 9634], 99.90th=[55313], 99.95th=[55313], 00:09:45.337 | 99.99th=[58983] 00:09:45.337 bw ( KiB/s): min=56224, max=59400, per=98.31%, avg=57762.67, stdev=1590.30, samples=3 00:09:45.337 iops : min=14056, max=14850, avg=14440.67, stdev=397.57, samples=3 00:09:45.338 write: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2001msec); 0 zone resets 00:09:45.338 slat (nsec): min=4959, max=85792, avg=6979.23, stdev=3559.18 00:09:45.338 clat (usec): min=341, max=68318, avg=4413.81, stdev=3410.77 00:09:45.338 lat (usec): min=347, max=68323, avg=4420.79, stdev=3411.14 00:09:45.338 clat percentiles (usec): 00:09:45.338 | 1.00th=[ 2442], 5.00th=[ 2933], 10.00th=[ 3064], 20.00th=[ 3294], 00:09:45.338 | 30.00th=[ 3458], 40.00th=[ 3687], 50.00th=[ 3884], 60.00th=[ 4178], 00:09:45.338 | 70.00th=[ 4490], 80.00th=[ 
5080], 90.00th=[ 5997], 95.00th=[ 6849], 00:09:45.338 | 99.00th=[ 9110], 99.50th=[10028], 99.90th=[64226], 99.95th=[66847], 00:09:45.338 | 99.99th=[67634] 00:09:45.338 bw ( KiB/s): min=56280, max=58680, per=97.97%, avg=57632.00, stdev=1228.54, samples=3 00:09:45.338 iops : min=14070, max=14670, avg=14408.00, stdev=307.14, samples=3 00:09:45.338 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:09:45.338 lat (msec) : 2=0.44%, 4=54.45%, 10=44.65%, 20=0.19%, 100=0.22% 00:09:45.338 cpu : usr=98.65%, sys=0.15%, ctx=3, majf=0, minf=606 00:09:45.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:45.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.338 issued rwts: total=29391,29429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.338 00:09:45.338 Run status group 0 (all jobs): 00:09:45.338 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (120MB), run=2001-2001msec 00:09:45.338 WRITE: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (121MB), run=2001-2001msec 00:09:45.338 ----------------------------------------------------- 00:09:45.338 Suppressions used: 00:09:45.338 count bytes template 00:09:45.338 1 32 /usr/src/fio/parse.c 00:09:45.338 1 8 libtcmalloc_minimal.so 00:09:45.338 ----------------------------------------------------- 00:09:45.338 00:09:45.338 16:36:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:45.338 16:36:28 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:45.338 00:09:45.338 real 0m36.009s 00:09:45.338 user 0m17.189s 00:09:45.338 sys 0m36.618s 00:09:45.338 16:36:28 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.338 16:36:28 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:45.338 ************************************ 00:09:45.338 END TEST nvme_fio 00:09:45.338 ************************************ 00:09:45.338 00:09:45.338 real 1m47.343s 00:09:45.338 user 3m41.589s 00:09:45.338 sys 0m47.841s 00:09:45.338 ************************************ 00:09:45.338 END TEST nvme 00:09:45.338 ************************************ 00:09:45.338 16:36:28 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.338 16:36:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.338 16:36:28 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:45.338 16:36:28 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:45.338 16:36:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.338 16:36:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.338 16:36:28 -- common/autotest_common.sh@10 -- # set +x 00:09:45.338 ************************************ 00:09:45.338 START TEST nvme_scc 00:09:45.338 ************************************ 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:45.338 * Looking for test storage... 
00:09:45.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.338 --rc genhtml_branch_coverage=1 00:09:45.338 --rc genhtml_function_coverage=1 00:09:45.338 --rc genhtml_legend=1 00:09:45.338 --rc geninfo_all_blocks=1 00:09:45.338 --rc geninfo_unexecuted_blocks=1 00:09:45.338 00:09:45.338 ' 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.338 --rc genhtml_branch_coverage=1 00:09:45.338 --rc genhtml_function_coverage=1 00:09:45.338 --rc genhtml_legend=1 00:09:45.338 --rc geninfo_all_blocks=1 00:09:45.338 --rc geninfo_unexecuted_blocks=1 00:09:45.338 00:09:45.338 ' 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.338 --rc genhtml_branch_coverage=1 00:09:45.338 --rc genhtml_function_coverage=1 00:09:45.338 --rc genhtml_legend=1 00:09:45.338 --rc geninfo_all_blocks=1 00:09:45.338 --rc geninfo_unexecuted_blocks=1 00:09:45.338 00:09:45.338 ' 00:09:45.338 16:36:28 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.338 --rc genhtml_branch_coverage=1 00:09:45.338 --rc genhtml_function_coverage=1 00:09:45.338 --rc genhtml_legend=1 00:09:45.338 --rc geninfo_all_blocks=1 00:09:45.338 --rc geninfo_unexecuted_blocks=1 00:09:45.338 00:09:45.338 ' 00:09:45.338 16:36:28 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.338 16:36:28 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.338 16:36:28 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.338 16:36:28 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.338 16:36:28 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.338 16:36:28 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:45.338 16:36:28 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
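The scripts/common.sh block above (it reappears at the start of every sub-test) is only a dotted-version comparison: lt 1.15 2 splits both strings on dots and dashes, walks the fields numerically, and succeeds because 1 < 2, which is what makes autotest_common keep the older --rc lcov_branch_coverage=1 option spelling for this lcov. A compact sketch of the same comparison, assuming purely numeric fields:

    lt() {   # succeed when version $1 sorts before version $2
        local -a a b; local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: keep --rc lcov_branch_coverage=1"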
00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:45.338 16:36:28 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:45.338 16:36:28 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.338 16:36:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:45.338 16:36:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:45.338 16:36:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:45.338 16:36:28 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:45.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.338 Waiting for block devices as requested 00:09:45.338 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.338 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.338 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.338 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.638 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:50.638 16:36:34 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:50.638 16:36:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.638 16:36:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:50.638 16:36:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.638 16:36:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
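[annotation] The nvme_get helper traced here reads the id-ctrl output line by line with IFS=':' and evals each "field : value" pair into a bash associative array, which is where entries such as nvme0[vid]=0x1b36 come from. A minimal sketch of that parse loop, an illustration of the pattern rather than the SPDK nvme/functions.sh code:

declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # "vid       " -> "vid"
    val=${val# }                    # drop the single space after the colon
    [[ -n $reg && -n $val ]] || continue
    eval "nvme0[$reg]=\"$val\""     # e.g. nvme0[vid]="0x1b36"
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "vid=${nvme0[vid]} mn=${nvme0[mn]} mdts=${nvme0[mdts]}"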
00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:50.638 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:50.639 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:50.640 16:36:34 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.640 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.641 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.641 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:50.642 
16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
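[annotation] The namespace walk seen a few entries above uses an extglob pattern that matches both the generic character device (ng0n1) and the block device (nvme0n1) under the controller's sysfs entry, keying the results by namespace id. A small illustrative sketch of that enumeration, an assumed simplification rather than the SPDK helper itself:

shopt -s extglob nullglob
declare -A ctrl_ns
ctrl=/sys/class/nvme/nvme0
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns=${ns##*/}                 # ng0n1, nvme0n1, ...
    ctrl_ns[${ns##*n}]=$ns       # keyed by namespace id, last match wins
done
declare -p ctrl_ns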
00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:50.642 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.642 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:50.643 16:36:34 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.643 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:50.644 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:50.644 16:36:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:50.644 16:36:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.644 16:36:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:50.644 16:36:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.644 16:36:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:50.645 16:36:34 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 
16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:50.645 
16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.645 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.646 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
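The xtrace above is nvme_get populating the nvme1 associative array: each output line of nvme id-ctrl /dev/nvme1 is split on ':' into a field name and value, empty values are skipped, and the rest are stored with eval (e.g. nvme1[vid]=0x1b36). A minimal sketch of that parsing loop, reconstructed from the trace; NVME_CMD stands in for the /usr/local/src/nvme-cli/nvme path shown above, and the exact trimming and quoting in nvme/functions.sh may differ:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # global associative array, e.g. nvme1=()

        # Run the remaining arguments as an nvme-cli command (e.g. id-ctrl /dev/nvme1)
        # and split each "field : value" line on the colon.
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip lines with no value
            reg=${reg//[[:space:]]/}             # trim the field name
            val=${val# }                         # drop the leading space after ':'
            eval "${ref}[\$reg]=\$val"           # e.g. nvme1[vid]=0x1b36
        done < <("${NVME_CMD:-nvme}" "$@")
    }

    nvme_get nvme1 id-ctrl /dev/nvme1            # afterwards "${nvme1[mdts]}" etc. are set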
00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.647 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.647 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.648 16:36:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
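Around nvme/functions.sh@47-63 the trace shows the outer discovery loop: each /sys/class/nvme/nvme* controller is checked with pci_can_use, its id-ctrl output is captured via nvme_get, every matching ng<N>n<M> and nvme<N>n<M> node then gets an id-ns pass, and the results are recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps. A condensed sketch of that walk, assuming the nvme_get helper sketched earlier; probe_nvme_ctrls and the simplified pci_can_use below are stand-ins for illustration, not the exact helpers from scripts/common.sh:

    # Simplified stand-in for scripts/common.sh pci_can_use (the real helper also
    # consults an allow list, as the [[ -z '' ]] check in the trace suggests).
    pci_can_use() { [[ " ${PCI_BLOCKED:-} " != *" $1 "* ]]; }

    shopt -s extglob nullglob                    # needed for the @(ng1|nvme1n)* glob below

    probe_nvme_ctrls() {
        local ctrl ctrl_dev ns ns_dev pci
        declare -gA ctrls nvmes bdfs
        declare -ga ordered_ctrls

        for ctrl in /sys/class/nvme/nvme*; do
            pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed: bdf from sysfs, e.g. 0000:00:10.0
            pci_can_use "$pci" || continue

            ctrl_dev=${ctrl##*/}                              # e.g. nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"

            declare -n _ctrl_ns=${ctrl_dev}_ns                # per-controller namespace map
            # Walk both the generic char node (ng1n1) and the block node (nvme1n1).
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by namespace index
            done
            unset -n _ctrl_ns

            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }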
00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:50.648 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:50.649 16:36:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 
16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.649 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
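The trace above shows nvme/functions.sh walking each namespace under /sys/class/nvme, invoking nvme-cli's id-ns (first for ng1n1, here for nvme1n1), splitting every "field : value" output line on ':' and eval-ing the pair into a global bash associative array keyed by the device name. A minimal standalone sketch of that same pattern, assuming nvme-cli is installed and root access to the device; the names are illustrative and it assigns directly instead of eval-ing into a caller-named array like the real helper does:

    #!/usr/bin/env bash
    # Cache `nvme id-ns` output in an associative array, as the traced loop does.
    declare -A id_ns=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # field names arrive space-padded, e.g. "nsze    "
        val=${val# }                 # drop the space that follows the colon
        [[ -n $reg && -n $val ]] || continue
        id_ns[$reg]=$val
    done < <(nvme id-ns /dev/nvme1n1)
    echo "nsze=${id_ns[nsze]} flbas=${id_ns[flbas]} lbaf7=${id_ns[lbaf7]}"

With the values logged above, that final line would report nsze=0x17a17a, flbas=0x7 and the lbaf7 descriptor.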
00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:50.650 
16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.650 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:50.651 16:36:34 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:50.651 16:36:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.651 16:36:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:50.651 16:36:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.651 16:36:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:50.651 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:50.652 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
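The mdts=7 recorded for nvme2 above is expressed in units of the controller's minimum memory page size (CAP.MPSMIN, which is not shown in this excerpt). Assuming the common 4 KiB minimum page, the largest single transfer this controller accepts works out as follows:

    # Max data transfer size implied by MDTS (sketch; assumes CAP.MPSMIN gives 4 KiB pages)
    mdts=7
    min_page=4096
    echo "MDTS limit: $(( (1 << mdts) * min_page )) bytes"   # 128 * 4096 = 524288 (512 KiB)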
00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.652 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:50.653 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:50.653 
16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.653 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.654 
16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
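[editor's note] To orient readers skimming this stretch of the trace: nvme/functions.sh is walking every namespace node exposed under /sys/class/nvme/nvme2 (the generic ng2nX character devices first, then the nvme2nX block devices), invoking /usr/local/src/nvme-cli/nvme id-ns against each one, and caching every reported identify field (nsze, ncap, flbas, lbaf0..lbaf7, and so on) in a global associative array named after the device, presumably so later assertions can read the values back without re-invoking nvme-cli. A minimal sketch of that parse loop follows; the function and array names (parse_id_ns, ns_info) are illustrative stand-ins, not the actual helpers from functions.sh.

    #!/usr/bin/env bash
    # Sketch only: mirrors the field-caching pattern visible in the trace,
    # i.e. split each "key : value" line of `nvme id-ns` output and store it.
    declare -A ns_info=()

    parse_id_ns() {                                      # parse_id_ns <namespace device>
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                     # "lbaf  4" -> "lbaf4"
            val="${val#"${val%%[![:space:]]*}"}"         # trim leading blanks
            [[ -n $reg && -n $val ]] || continue         # skip banner/blank lines
            ns_info[$reg]=$val                           # cf. eval 'ng2n1[nsze]="0x100000"' above
        done < <(nvme id-ns "$dev")                      # requires nvme-cli
    }

    parse_id_ns /dev/ng2n1                               # illustrative invocation
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]} lbaf4=${ns_info[lbaf4]}"

The repeated eval 'ng2n1[field]="value"' statements in the trace are the script's equivalent of the ns_info[$reg]=$val assignment here, just targeting a dynamically named array per device; the same loop then repeats verbatim for ng2n2, ng2n3 and the nvme2nX block nodes below.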
00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.654 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:50.655 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:50.656 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 
16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.656 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.657 16:36:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.657 16:36:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:50.657 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.658 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.659 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:50.659 16:36:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.659 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
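The trace above is nvme/functions.sh's nvme_get helper filling a global bash associative array (here nvme2n1) from `nvme id-ns` output: each "field : value" line is split on the first ':' and stored under the field name. A minimal stand-alone sketch of that pattern, assuming nvme-cli is installed and /dev/nvme2n1 exists; the array name nsinfo and the field-name cleanup are illustrative, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch of the IFS=: / read -r reg val pattern visible in the trace:
    # parse "field : value" lines of `nvme id-ns` into an associative array.
    declare -A nsinfo=()

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4"
        [[ -n $reg && -n $val ]] || continue
        nsinfo[$reg]=${val# }           # e.g. nsinfo[nsze]=0x100000
    done < <(nvme id-ns /dev/nvme2n1)

    printf 'nsze=%s flbas=%s lbaf4=%s\n' \
        "${nsinfo[nsze]}" "${nsinfo[flbas]}" "${nsinfo[lbaf4]}"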
]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.660 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:50.661 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.661 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:50.662 
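In the lbaf lines above, lbads is the base-2 log of the logical block size and ms is the per-block metadata size; flbas=0x4 selects LBA format 4, so these namespaces are formatted with 4096-byte blocks and no metadata. A quick check of that arithmetic, using the values from the trace:

    flbas=0x4; lbads=12
    echo "in-use lbaf: $(( flbas & 0xf ))"        # 4 (FLBAS bits 3:0)
    echo "block size:  $(( 1 << lbads )) bytes"   # 4096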
16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:50.662 16:36:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.662 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.663 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.663 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:50.664 16:36:34 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:50.664 16:36:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.664 16:36:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:50.664 16:36:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.664 16:36:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:50.664 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:50.664 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:50.665 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 
16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:50.665 16:36:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 
16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.665 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:50.666 
16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:50.666 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:50.667 16:36:34 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:50.667 16:36:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
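The selection loop traced here boils down to reading each controller's ONCS field and testing bit 8, which is how a controller advertises the Simple Copy command. Below is a minimal stand-alone sketch of that check using nvme-cli rather than SPDK's functions.sh; the device path /dev/nvme0 and the whitespace trimming are illustration-only assumptions, not part of the test suite.

```bash
#!/usr/bin/env bash
# Minimal sketch of the capability check traced above (not SPDK's functions.sh):
# parse `nvme id-ctrl` into an associative array, then test ONCS bit 8, which
# advertises the Simple Copy command. Assumes nvme-cli is installed, /dev/nvme0
# exists, and the script runs with enough privilege to issue Identify.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}      # field names like "oncs"
    val=$(echo "$val" | xargs)    # trim the value, e.g. "0x15d"
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)

oncs=${ctrl[oncs]:-0}
if (( oncs & (1 << 8) )); then
    echo "/dev/nvme0 reports Simple Copy support (oncs=$oncs)"
else
    echo "/dev/nvme0 does not report Simple Copy support (oncs=$oncs)"
fi
```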
00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:50.667 16:36:34 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:50.667 16:36:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:50.667 16:36:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:50.667 16:36:34 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.239 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.239 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.239 16:36:36 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:51.239 16:36:36 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.239 16:36:36 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.239 16:36:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:51.239 ************************************ 00:09:51.239 START TEST nvme_simple_copy 00:09:51.239 ************************************ 00:09:51.239 16:36:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:51.810 Initializing NVMe Controllers 00:09:51.810 Attaching to 0000:00:10.0 00:09:51.810 Controller supports SCC. Attached to 0000:00:10.0 00:09:51.810 Namespace ID: 1 size: 6GB 00:09:51.810 Initialization complete. 
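Before the copy test runs, setup.sh rebinds the target controllers (for example 0000:00:10.0, vendor 1b36 device 0010) from the kernel nvme driver to uio_pci_generic so the userspace test can claim them. A quick way to confirm which driver currently owns a device is to read its sysfs entries; the sketch below is illustrative only and assumes a Linux host where that BDF exists.

```bash
#!/usr/bin/env bash
# Illustrative only: report vendor/device IDs and the bound driver for one PCI
# function, the same information the setup.sh lines above print per device.
# The BDF 0000:00:10.0 is taken from this log; adjust it for another host.
bdf=0000:00:10.0
dev=/sys/bus/pci/devices/$bdf

vendor=$(cat "$dev/vendor")            # e.g. 0x1b36 (QEMU)
device=$(cat "$dev/device")            # e.g. 0x0010 (NVMe controller)
if [[ -e $dev/driver ]]; then
    driver=$(basename "$(readlink -f "$dev/driver")")
else
    driver="(none)"
fi
echo "$bdf ($vendor $device): bound to $driver"
```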
00:09:51.810 00:09:51.810 Controller QEMU NVMe Ctrl (12340 ) 00:09:51.810 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:51.810 Namespace Block Size:4096 00:09:51.810 Writing LBAs 0 to 63 with Random Data 00:09:51.810 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:51.810 LBAs matching Written Data: 64 00:09:51.810 ************************************ 00:09:51.810 END TEST nvme_simple_copy 00:09:51.810 ************************************ 00:09:51.810 00:09:51.810 real 0m0.278s 00:09:51.810 user 0m0.107s 00:09:51.810 sys 0m0.068s 00:09:51.810 16:36:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.810 16:36:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:51.810 00:09:51.810 real 0m7.883s 00:09:51.810 user 0m1.136s 00:09:51.810 sys 0m1.363s 00:09:51.810 16:36:36 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.810 ************************************ 00:09:51.810 END TEST nvme_scc 00:09:51.810 ************************************ 00:09:51.810 16:36:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:51.810 16:36:36 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:51.810 16:36:36 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:51.810 16:36:36 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:51.810 16:36:36 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:51.810 16:36:36 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:51.810 16:36:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.810 16:36:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.810 16:36:36 -- common/autotest_common.sh@10 -- # set +x 00:09:51.810 ************************************ 00:09:51.810 START TEST nvme_fdp 00:09:51.810 ************************************ 00:09:51.810 16:36:36 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:51.810 * Looking for test storage... 00:09:51.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.810 16:36:36 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.810 16:36:36 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.810 16:36:36 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.810 16:36:36 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.810 16:36:36 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:51.810 16:36:36 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.811 16:36:36 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.811 --rc genhtml_branch_coverage=1 00:09:51.811 --rc genhtml_function_coverage=1 00:09:51.811 --rc genhtml_legend=1 00:09:51.811 --rc geninfo_all_blocks=1 00:09:51.811 --rc geninfo_unexecuted_blocks=1 00:09:51.811 00:09:51.811 ' 00:09:51.811 16:36:36 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.811 --rc genhtml_branch_coverage=1 00:09:51.811 --rc genhtml_function_coverage=1 00:09:51.811 --rc genhtml_legend=1 00:09:51.811 --rc geninfo_all_blocks=1 00:09:51.811 --rc geninfo_unexecuted_blocks=1 00:09:51.811 00:09:51.811 ' 00:09:51.811 16:36:36 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.811 --rc genhtml_branch_coverage=1 00:09:51.811 --rc genhtml_function_coverage=1 00:09:51.811 --rc genhtml_legend=1 00:09:51.811 --rc geninfo_all_blocks=1 00:09:51.811 --rc geninfo_unexecuted_blocks=1 00:09:51.811 00:09:51.811 ' 00:09:51.811 16:36:36 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.811 --rc genhtml_branch_coverage=1 00:09:51.811 --rc genhtml_function_coverage=1 00:09:51.811 --rc genhtml_legend=1 00:09:51.811 --rc geninfo_all_blocks=1 00:09:51.811 --rc geninfo_unexecuted_blocks=1 00:09:51.811 00:09:51.811 ' 00:09:51.811 16:36:36 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.811 16:36:36 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.811 16:36:36 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.811 16:36:36 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.811 16:36:36 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.811 16:36:36 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.811 16:36:36 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.811 16:36:36 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.811 16:36:36 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:51.811 16:36:36 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:51.811 16:36:36 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:51.811 16:36:36 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.811 16:36:36 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:52.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.381 Waiting for block devices as requested 00:09:52.381 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.641 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.641 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.641 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.934 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:57.934 16:36:42 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:57.934 16:36:42 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:57.934 16:36:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:57.934 16:36:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:57.934 16:36:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.934 16:36:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:57.934 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.934 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:57.935 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:57.935 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:57.936 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.936 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 
16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:57.937 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.937 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:57.938 16:36:42 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:57.938 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.938 16:36:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:57.938 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
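For orientation while the ng0n1 fields scroll past: the nsze/ncap/nuse values captured here (0x140000 blocks), together with the in-use LBA format reported a few entries further down (lbaf4, lbads:12, i.e. 4096-byte blocks), describe a 5 GiB namespace. A quick way to check with bash arithmetic (not part of the test itself):

    # nsze = 0x140000 blocks; lbaf4 "(in use)" has lbads:12 -> 2^12 = 4096-byte blocks
    echo $(( 0x140000 * (1 << 12) ))                 # 5368709120 bytes
    echo "$(( (0x140000 * (1 << 12)) >> 30 )) GiB"   # 5 GiB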
00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:57.939 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.939 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
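What this long run of set -x output is doing: nvme/functions.sh invokes nvme-cli's id-ctrl/id-ns for each device, splits every "field : value" line on the first colon, and evals the pair into a global associative array (nvme0, ng0n1, nvme0n1, ...), which is why the log reads as a stream of assignments such as nvme0[oacs]=0x12a and ng0n1[nsze]=0x140000. A minimal stand-alone sketch of that pattern, with parse_nvme_id as an illustrative name rather than the real nvme_get helper:

    #!/usr/bin/env bash
    # Simplified sketch of the parsing pattern traced above; not the actual
    # nvme/functions.sh implementation.
    parse_nvme_id() {                       # parse_nvme_id <array> <id-ctrl|id-ns> <device>
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                 # global associative array, e.g. nvme0=()
        while IFS=: read -r reg val; do     # split "oacs      : 0x12a" at the first ':'
            reg=${reg//[[:space:]]/}        # "oacs      "  -> "oacs"
            val=${val# }                    # drop the single space nvme-cli prints after ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"      # nvme0[oacs]=0x12a, ng0n1[nsze]=0x140000, ...
        done < <(nvme "$cmd" "$dev")
    }
    # Example: parse_nvme_id nvme0 id-ctrl /dev/nvme0 && echo "${nvme0[subnqn]}"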
00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.940 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:57.941 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:57.941 16:36:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.941 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:57.942 16:36:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:57.942 16:36:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:57.942 16:36:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.942 16:36:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:57.942 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:57.942 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:57.943 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:57.944 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:57.945 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:57.946 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:57.946 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:57.946 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:57.947 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:57.947 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.947 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:57.948 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:57.948 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.948 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
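[editor note] The xtrace above and below is the suite's nvme_get helper walking "nvme id-ns" / "nvme id-ctrl" output: it splits each line on the first colon into a register name and a value, skips empty values, and evals the pair into a bash associative array named after the device. A minimal stand-alone sketch of that loop, assuming nvme-cli's "reg : value" layout and an illustrative array/device name (the real helper is nvme_get in test/nvme/functions.sh):
declare -A ns_info=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # field names arrive padded with spaces
    val=${val# }                          # drop the single space after the colon
    [[ -n $reg && -n $val ]] || continue  # skip blank/heading lines, as the trace does
    ns_info[$reg]=$val                    # e.g. ns_info[nsze]=0x100000
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"
Values that themselves contain colons (the lbafN descriptors such as "ms:0 lbads:9 rp:0") survive intact because only the first colon is used as the delimiter, which matches what the trace stores.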
00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:57.949 16:36:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:57.949 16:36:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:57.949 16:36:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.949 16:36:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.949 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.949 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
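[editor note] A little earlier the trace moved on to the next controller by globbing /sys/class/nvme/nvme*, resolving its PCI address (0000:00:12.0 for nvme2) and asking pci_can_use whether the device may be touched before running id-ctrl. A rough sketch of that walk, assuming the sysfs "address" attribute and a hypothetical PCI_ALLOWED allow-list variable for illustration (the real filter is pci_can_use in scripts/common.sh):
declare -A bdfs=()
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl/address ]] || continue            # skip controllers without a PCI address
    pci=$(<"$ctrl/address")                       # e.g. 0000:00:12.0
    if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $pci "* ]]; then
        continue                                  # device is not on the allow-list
    fi
    bdfs[${ctrl##*/}]=$pci                        # bdfs[nvme2]=0000:00:12.0
done
for name in "${!bdfs[@]}"; do
    echo "$name -> ${bdfs[$name]}"
done
This mirrors the bdfs["$ctrl_dev"]=... bookkeeping visible in the trace once a controller passes the filter.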
00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:57.950 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
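[editor note] On the wctemp=343 / cctemp=373 values just captured: Identify Controller reports these thresholds in Kelvin, so the QEMU controller advertises roughly 70 C (warning) and 100 C (critical). A self-contained one-liner using the values from the trace and the usual integer Kelvin-to-Celsius offset:
wctemp=343 cctemp=373   # values captured by the trace above
echo "warning threshold: $((wctemp - 273)) C, critical threshold: $((cctemp - 273)) C"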
00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.950 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:57.951 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.951 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
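[editor note] The namespace glob that appears next in the trace relies on extglob: one pattern matches both the generic character nodes (ng2n1, ng2n2, ...) and the block namespaces (nvme2n1, ...) under a controller's sysfs directory. A small sketch under the same assumptions, with the controller path hard-coded for illustration:
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2                            # controller found by the walk above
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "found namespace node: ${ns##*/}"            # e.g. ng2n1, nvme2n1
done
Here "${ctrl##*nvme}" reduces to "2" and "${ctrl##*/}n" to "nvme2n", so the extglob alternation expands to @(ng2|nvme2n)*, exactly the pattern the trace evaluates before running id-ns on each node.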
00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.952 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 
16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.953 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.954 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:57.955 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 
16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.955 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:57.956 
16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:57.956 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:57.956 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:57.957 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:57.957 16:36:42 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.957 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.220 
16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:58.220 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:58.221 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.221 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.222 
16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.222 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:58.223 16:36:42 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:58.223 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:58.224 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:58.224 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:58.225 16:36:42 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:58.225 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.225 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:58.226 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:58.226 16:36:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:58.226 16:36:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:58.226 16:36:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:58.226 16:36:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.227 16:36:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:58.227 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 
16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.228 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.229 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
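The unrolled trace above and below is nvme/functions.sh caching Identify Controller fields into a bash associative array, one eval per field. Condensed, the pattern is roughly the sketch that follows; reading the fields from nvme id-ctrl here is an assumption for illustration only, the script parses its own cached identify text instead.

    # Sketch only: cache "field : value" pairs into an associative array,
    # so later helpers can look up e.g. nvme3[vwc] or nvme3[subnqn].
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field name with padding stripped
        val=${val# }                    # drop the space after the colon
        [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)   # assumed source of the field:value pairs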
00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:58.230 16:36:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:58.230 16:36:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:58.231 16:36:42 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:58.231 16:36:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:58.231 16:36:42 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:58.231 16:36:42 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:58.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.054 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.054 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.054 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.054 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.313 16:36:43 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:59.313 16:36:43 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:59.313 16:36:43 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.313 16:36:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:59.313 ************************************ 00:09:59.313 START TEST nvme_flexible_data_placement 00:09:59.313 ************************************ 00:09:59.313 16:36:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:59.313 Initializing NVMe Controllers 00:09:59.313 Attaching to 0000:00:13.0 00:09:59.313 Controller supports FDP Attached to 0000:00:13.0 00:09:59.313 Namespace ID: 1 Endurance Group ID: 1 00:09:59.313 Initialization complete. 
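Worth noting before the FDP report below: the controller was chosen by ctrl_has_fdp in the trace above, which tests bit 19 (FDP supported) of the Identify Controller CTRATT value — 0x88010 has it set, 0x8000 does not. A standalone sketch of the same check follows; using nvme-cli and jq here is an assumption, the test script reads its cached ctratt value instead.

    # Sketch only: report whether a controller advertises Flexible Data Placement.
    ctrl=/dev/nvme3
    ctratt=$(nvme id-ctrl "$ctrl" -o json | jq -r '.ctratt')
    if (( ctratt & (1 << 19) )); then
        printf '%s supports FDP (ctratt=0x%x)\n' "$ctrl" "$ctratt"
    fi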
00:09:59.313 00:09:59.313 ================================== 00:09:59.313 == FDP tests for Namespace: #01 == 00:09:59.313 ================================== 00:09:59.313 00:09:59.313 Get Feature: FDP: 00:09:59.313 ================= 00:09:59.313 Enabled: Yes 00:09:59.313 FDP configuration Index: 0 00:09:59.313 00:09:59.313 FDP configurations log page 00:09:59.313 =========================== 00:09:59.313 Number of FDP configurations: 1 00:09:59.313 Version: 0 00:09:59.313 Size: 112 00:09:59.313 FDP Configuration Descriptor: 0 00:09:59.313 Descriptor Size: 96 00:09:59.313 Reclaim Group Identifier format: 2 00:09:59.313 FDP Volatile Write Cache: Not Present 00:09:59.313 FDP Configuration: Valid 00:09:59.313 Vendor Specific Size: 0 00:09:59.313 Number of Reclaim Groups: 2 00:09:59.313 Number of Reclaim Unit Handles: 8 00:09:59.313 Max Placement Identifiers: 128 00:09:59.313 Number of Namespaces Supported: 256 00:09:59.313 Reclaim unit Nominal Size: 6000000 bytes 00:09:59.313 Estimated Reclaim Unit Time Limit: Not Reported 00:09:59.313 RUH Desc #000: RUH Type: Initially Isolated 00:09:59.313 RUH Desc #001: RUH Type: Initially Isolated 00:09:59.313 RUH Desc #002: RUH Type: Initially Isolated 00:09:59.313 RUH Desc #003: RUH Type: Initially Isolated 00:09:59.313 RUH Desc #004: RUH Type: Initially Isolated 00:09:59.313 RUH Desc #005: RUH Type: Initially Isolated 00:09:59.313 RUH Desc #006: RUH Type: Initially Isolated 00:09:59.313 RUH Desc #007: RUH Type: Initially Isolated 00:09:59.313 00:09:59.313 FDP reclaim unit handle usage log page 00:09:59.313 ====================================== 00:09:59.313 Number of Reclaim Unit Handles: 8 00:09:59.313 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:59.313 RUH Usage Desc #001: RUH Attributes: Unused 00:09:59.313 RUH Usage Desc #002: RUH Attributes: Unused 00:09:59.313 RUH Usage Desc #003: RUH Attributes: Unused 00:09:59.313 RUH Usage Desc #004: RUH Attributes: Unused 00:09:59.313 RUH Usage Desc #005: RUH Attributes: Unused 00:09:59.313 RUH Usage Desc #006: RUH Attributes: Unused 00:09:59.313 RUH Usage Desc #007: RUH Attributes: Unused 00:09:59.313 00:09:59.313 FDP statistics log page 00:09:59.313 ======================= 00:09:59.313 Host bytes with metadata written: 832106496 00:09:59.313 Media bytes with metadata written: 832217088 00:09:59.313 Media bytes erased: 0 00:09:59.313 00:09:59.313 FDP Reclaim unit handle status 00:09:59.313 ============================== 00:09:59.313 Number of RUHS descriptors: 2 00:09:59.313 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004671 00:09:59.313 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:59.313 00:09:59.313 FDP write on placement id: 0 success 00:09:59.313 00:09:59.313 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:59.313 00:09:59.313 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:59.313 00:09:59.313 Get Feature: FDP Events for Placement handle: #0 00:09:59.313 ======================== 00:09:59.313 Number of FDP Events: 6 00:09:59.313 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:59.313 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:59.313 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:59.313 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:59.313 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:59.313 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:59.313 00:09:59.313 FDP events log page
00:09:59.313 =================== 00:09:59.313 Number of FDP events: 1 00:09:59.313 FDP Event #0: 00:09:59.313 Event Type: RU Not Written to Capacity 00:09:59.313 Placement Identifier: Valid 00:09:59.313 NSID: Valid 00:09:59.313 Location: Valid 00:09:59.313 Placement Identifier: 0 00:09:59.313 Event Timestamp: 5 00:09:59.313 Namespace Identifier: 1 00:09:59.313 Reclaim Group Identifier: 0 00:09:59.313 Reclaim Unit Handle Identifier: 0 00:09:59.313 00:09:59.313 FDP test passed 00:09:59.313 00:09:59.313 real 0m0.235s 00:09:59.313 user 0m0.069s 00:09:59.313 sys 0m0.065s 00:09:59.313 16:36:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.313 ************************************ 00:09:59.313 END TEST nvme_flexible_data_placement 00:09:59.313 ************************************ 00:09:59.313 16:36:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:59.572 00:09:59.572 real 0m7.700s 00:09:59.572 user 0m1.180s 00:09:59.572 sys 0m1.381s 00:09:59.572 16:36:44 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.572 16:36:44 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:59.572 ************************************ 00:09:59.572 END TEST nvme_fdp 00:09:59.572 ************************************ 00:09:59.572 16:36:44 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:59.572 16:36:44 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:59.572 16:36:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.572 16:36:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.572 16:36:44 -- common/autotest_common.sh@10 -- # set +x 00:09:59.572 ************************************ 00:09:59.572 START TEST nvme_rpc 00:09:59.572 ************************************ 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:59.572 * Looking for test storage... 
00:09:59.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.572 16:36:44 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.572 --rc genhtml_branch_coverage=1 00:09:59.572 --rc genhtml_function_coverage=1 00:09:59.572 --rc genhtml_legend=1 00:09:59.572 --rc geninfo_all_blocks=1 00:09:59.572 --rc geninfo_unexecuted_blocks=1 00:09:59.572 00:09:59.572 ' 00:09:59.572 16:36:44 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.572 16:36:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:59.572 16:36:44 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:59.573 16:36:44 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:59.831 16:36:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:59.831 16:36:44 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65741 00:09:59.831 16:36:44 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:59.831 16:36:44 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:59.831 16:36:44 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65741 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65741 ']' 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.831 16:36:44 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.831 [2024-11-20 16:36:44.552043] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
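The nvme_rpc run that follows drives the freshly started spdk_tgt over JSON-RPC. Condensed from the trace, the sequence it issues amounts to the sketch below (paths as in this workspace; the firmware file is deliberately nonexistent, so the middle call is expected to fail with "open file failed.").

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1             # expected failure path
    $rpc bdev_nvme_detach_controller Nvme0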
00:09:59.831 [2024-11-20 16:36:44.552164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65741 ] 00:09:59.831 [2024-11-20 16:36:44.713168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.089 [2024-11-20 16:36:44.823239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.089 [2024-11-20 16:36:44.823442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.673 16:36:45 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.673 16:36:45 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:00.673 16:36:45 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:00.957 Nvme0n1 00:10:00.957 16:36:45 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:00.957 16:36:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:01.219 request: 00:10:01.219 { 00:10:01.219 "bdev_name": "Nvme0n1", 00:10:01.220 "filename": "non_existing_file", 00:10:01.220 "method": "bdev_nvme_apply_firmware", 00:10:01.220 "req_id": 1 00:10:01.220 } 00:10:01.220 Got JSON-RPC error response 00:10:01.220 response: 00:10:01.220 { 00:10:01.220 "code": -32603, 00:10:01.220 "message": "open file failed." 00:10:01.220 } 00:10:01.220 16:36:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:01.220 16:36:45 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:01.220 16:36:45 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:01.479 16:36:46 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:01.479 16:36:46 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65741 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65741 ']' 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65741 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65741 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65741' 00:10:01.479 killing process with pid 65741 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65741 00:10:01.479 16:36:46 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65741 00:10:02.863 00:10:02.863 real 0m3.362s 00:10:02.863 user 0m6.401s 00:10:02.863 sys 0m0.513s 00:10:02.863 16:36:47 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.863 16:36:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.863 ************************************ 00:10:02.863 END TEST nvme_rpc 00:10:02.863 ************************************ 00:10:02.863 16:36:47 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:02.863 16:36:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:02.863 16:36:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.863 16:36:47 -- common/autotest_common.sh@10 -- # set +x 00:10:02.863 ************************************ 00:10:02.863 START TEST nvme_rpc_timeouts 00:10:02.863 ************************************ 00:10:02.863 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:02.863 * Looking for test storage... 00:10:02.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.863 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:02.863 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:02.863 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.123 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.123 16:36:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:03.123 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.123 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.123 --rc genhtml_branch_coverage=1 00:10:03.123 --rc genhtml_function_coverage=1 00:10:03.123 --rc genhtml_legend=1 00:10:03.123 --rc geninfo_all_blocks=1 00:10:03.123 --rc geninfo_unexecuted_blocks=1 00:10:03.123 00:10:03.123 ' 00:10:03.123 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.123 --rc genhtml_branch_coverage=1 00:10:03.123 --rc genhtml_function_coverage=1 00:10:03.123 --rc genhtml_legend=1 00:10:03.123 --rc geninfo_all_blocks=1 00:10:03.123 --rc geninfo_unexecuted_blocks=1 00:10:03.123 00:10:03.123 ' 00:10:03.123 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.123 --rc genhtml_branch_coverage=1 00:10:03.123 --rc genhtml_function_coverage=1 00:10:03.123 --rc genhtml_legend=1 00:10:03.123 --rc geninfo_all_blocks=1 00:10:03.123 --rc geninfo_unexecuted_blocks=1 00:10:03.123 00:10:03.123 ' 00:10:03.123 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.123 --rc genhtml_branch_coverage=1 00:10:03.123 --rc genhtml_function_coverage=1 00:10:03.123 --rc genhtml_legend=1 00:10:03.124 --rc geninfo_all_blocks=1 00:10:03.124 --rc geninfo_unexecuted_blocks=1 00:10:03.124 00:10:03.124 ' 00:10:03.124 16:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.124 16:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65813 00:10:03.124 16:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65813 00:10:03.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:03.124 16:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65845 00:10:03.124 16:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:03.124 16:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65845 00:10:03.124 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65845 ']' 00:10:03.124 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.124 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.124 16:36:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:03.124 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.124 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.124 16:36:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:03.124 [2024-11-20 16:36:47.880269] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:10:03.124 [2024-11-20 16:36:47.880427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65845 ] 00:10:03.385 [2024-11-20 16:36:48.037715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:03.385 [2024-11-20 16:36:48.150562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.385 [2024-11-20 16:36:48.150725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.320 16:36:48 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.320 16:36:48 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:04.320 Checking default timeout settings: 00:10:04.320 16:36:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:04.320 16:36:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:04.320 Making settings changes with rpc: 00:10:04.320 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:04.320 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:04.579 Check default vs. modified settings: 00:10:04.579 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:04.579 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65813 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65813 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:04.838 Setting action_on_timeout is changed as expected. 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65813 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65813 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:04.838 Setting timeout_us is changed as expected. 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65813 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65813 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:04.838 Setting timeout_admin_us is changed as expected. 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65813 /tmp/settings_modified_65813 00:10:04.838 16:36:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65845 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65845 ']' 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65845 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65845 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65845' 00:10:04.839 killing process with pid 65845 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65845 00:10:04.839 16:36:49 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65845 00:10:06.746 RPC TIMEOUT SETTING TEST PASSED. 00:10:06.746 16:36:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
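The pass/fail logic above reduces to saving the bdev configuration before and after bdev_nvme_set_options and checking that each of the three fields actually changed. A condensed sketch of that comparison, with the file names and field list taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_65813
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_65813
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_65813 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_65813 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done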
00:10:06.746 00:10:06.746 real 0m3.539s 00:10:06.746 user 0m6.703s 00:10:06.746 sys 0m0.555s 00:10:06.746 ************************************ 00:10:06.746 END TEST nvme_rpc_timeouts 00:10:06.746 ************************************ 00:10:06.746 16:36:51 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.746 16:36:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:06.746 16:36:51 -- spdk/autotest.sh@239 -- # uname -s 00:10:06.746 16:36:51 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:06.746 16:36:51 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:06.746 16:36:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.746 16:36:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.746 16:36:51 -- common/autotest_common.sh@10 -- # set +x 00:10:06.746 ************************************ 00:10:06.746 START TEST sw_hotplug 00:10:06.746 ************************************ 00:10:06.746 16:36:51 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:06.746 * Looking for test storage... 00:10:06.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.746 16:36:51 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.746 16:36:51 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.746 16:36:51 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.746 16:36:51 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:06.746 16:36:51 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.747 16:36:51 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:06.747 16:36:51 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.747 16:36:51 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.747 --rc genhtml_branch_coverage=1 00:10:06.747 --rc genhtml_function_coverage=1 00:10:06.747 --rc genhtml_legend=1 00:10:06.747 --rc geninfo_all_blocks=1 00:10:06.747 --rc geninfo_unexecuted_blocks=1 00:10:06.747 00:10:06.747 ' 00:10:06.747 16:36:51 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.747 --rc genhtml_branch_coverage=1 00:10:06.747 --rc genhtml_function_coverage=1 00:10:06.747 --rc genhtml_legend=1 00:10:06.747 --rc geninfo_all_blocks=1 00:10:06.747 --rc geninfo_unexecuted_blocks=1 00:10:06.747 00:10:06.747 ' 00:10:06.747 16:36:51 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.747 --rc genhtml_branch_coverage=1 00:10:06.747 --rc genhtml_function_coverage=1 00:10:06.747 --rc genhtml_legend=1 00:10:06.747 --rc geninfo_all_blocks=1 00:10:06.747 --rc geninfo_unexecuted_blocks=1 00:10:06.747 00:10:06.747 ' 00:10:06.747 16:36:51 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.747 --rc genhtml_branch_coverage=1 00:10:06.747 --rc genhtml_function_coverage=1 00:10:06.747 --rc genhtml_legend=1 00:10:06.747 --rc geninfo_all_blocks=1 00:10:06.747 --rc geninfo_unexecuted_blocks=1 00:10:06.747 00:10:06.747 ' 00:10:06.747 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:07.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.007 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.007 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.007 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.007 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.007 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:07.007 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:07.007 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:07.007 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:07.007 16:36:51 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.268 16:36:51 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.268 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:07.269 16:36:51 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:07.269 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:07.269 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:07.269 16:36:51 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:07.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.790 Waiting for block devices as requested 00:10:07.790 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.790 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.790 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.790 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.134 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:13.134 16:36:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:13.134 16:36:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:13.395 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:13.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:13.395 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:13.655 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:13.915 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.915 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.176 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:14.176 16:36:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:14.176 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66702 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:14.177 16:36:58 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:14.177 16:36:58 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:14.177 16:36:58 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:14.177 16:36:58 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:14.177 16:36:58 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:14.177 16:36:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:14.438 Initializing NVMe Controllers 00:10:14.439 Attaching to 0000:00:10.0 00:10:14.439 Attaching to 0000:00:11.0 00:10:14.439 Attached to 0000:00:10.0 00:10:14.439 Attached to 0000:00:11.0 00:10:14.439 Initialization complete. Starting I/O... 
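The two devices attached here come from the nvmes array built earlier by nvme_in_userspace, which selects PCI functions with class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). That enumeration condenses to the pipeline below, lifted from the lspci filter in the trace; in this run it yields 0000:00:10.0 through 0000:00:13.0, of which nvme_count=2 keeps the first two.

    # Sketch: list NVMe controller BDFs by PCI class code 0108, prog-if 02.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'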
00:10:14.439 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:14.439 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:14.439 00:10:15.381 QEMU NVMe Ctrl (12340 ): 2129 I/Os completed (+2129) 00:10:15.381 QEMU NVMe Ctrl (12341 ): 2129 I/Os completed (+2129) 00:10:15.381 00:10:16.331 QEMU NVMe Ctrl (12340 ): 5152 I/Os completed (+3023) 00:10:16.331 QEMU NVMe Ctrl (12341 ): 5208 I/Os completed (+3079) 00:10:16.331 00:10:17.707 QEMU NVMe Ctrl (12340 ): 8250 I/Os completed (+3098) 00:10:17.707 QEMU NVMe Ctrl (12341 ): 8292 I/Os completed (+3084) 00:10:17.707 00:10:18.640 QEMU NVMe Ctrl (12340 ): 11367 I/Os completed (+3117) 00:10:18.640 QEMU NVMe Ctrl (12341 ): 11424 I/Os completed (+3132) 00:10:18.640 00:10:19.577 QEMU NVMe Ctrl (12340 ): 14489 I/Os completed (+3122) 00:10:19.577 QEMU NVMe Ctrl (12341 ): 14522 I/Os completed (+3098) 00:10:19.577 00:10:20.150 16:37:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:20.150 16:37:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.150 16:37:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.150 [2024-11-20 16:37:04.951458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:20.150 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:20.150 [2024-11-20 16:37:04.952682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.952734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.952754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.952771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:20.150 [2024-11-20 16:37:04.954989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.955042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.955057] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.955071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 16:37:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.150 16:37:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.150 [2024-11-20 16:37:04.970254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
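Editor's note: the removal half of each hotplug event is driven from sw_hotplug.sh@40, which echoes 1 into a per-device sysfs attribute while the example app still has I/O outstanding, producing the "Controller removed" and "failed state" records above. The xtrace only shows the bare "echo 1", so the target file below is an assumption based on the standard PCI hot-removal knob:

  # surprise-remove both controllers under test while I/O is in flight
  nvmes=(0000:00:10.0 0000:00:11.0)
  for bdf in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$bdf/remove"
  done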
00:10:20.150 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:20.150 [2024-11-20 16:37:04.971448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.971490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.971511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.971529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:20.150 [2024-11-20 16:37:04.973200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.973235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.973251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 [2024-11-20 16:37:04.973264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.150 16:37:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:20.150 16:37:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:20.150 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:20.150 EAL: Scan for (pci) bus failed. 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:20.412 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:20.412 Attaching to 0000:00:10.0 00:10:20.412 Attached to 0000:00:10.0 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.412 16:37:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:20.412 Attaching to 0000:00:11.0 00:10:20.412 Attached to 0000:00:11.0 00:10:21.355 QEMU NVMe Ctrl (12340 ): 2931 I/Os completed (+2931) 00:10:21.355 QEMU NVMe Ctrl (12341 ): 2748 I/Os completed (+2748) 00:10:21.355 00:10:22.294 QEMU NVMe Ctrl (12340 ): 6171 I/Os completed (+3240) 00:10:22.294 QEMU NVMe Ctrl (12341 ): 5844 I/Os completed (+3096) 00:10:22.294 00:10:23.669 QEMU NVMe Ctrl (12340 ): 9300 I/Os completed (+3129) 00:10:23.669 QEMU NVMe Ctrl (12341 ): 9177 I/Os completed (+3333) 00:10:23.669 00:10:24.603 QEMU NVMe Ctrl (12340 ): 12202 I/Os completed (+2902) 00:10:24.603 QEMU NVMe Ctrl (12341 ): 12135 I/Os completed (+2958) 00:10:24.603 00:10:25.547 QEMU NVMe Ctrl (12340 ): 15286 I/Os completed (+3084) 00:10:25.547 QEMU NVMe Ctrl (12341 ): 15223 I/Os completed (+3088) 00:10:25.547 00:10:26.480 QEMU NVMe Ctrl (12340 ): 18269 I/Os completed (+2983) 00:10:26.480 QEMU NVMe Ctrl (12341 ): 18200 I/Os completed (+2977) 00:10:26.480 00:10:27.480 QEMU NVMe Ctrl (12340 ): 21470 I/Os completed (+3201) 00:10:27.480 QEMU NVMe Ctrl (12341 ): 21591 I/Os completed (+3391) 
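Editor's note: the re-attach half follows at sw_hotplug.sh@56-@62: a global rescan plus, per device, an echo of uio_pci_generic, the BDF, the BDF again, and an empty string. The rescan knob is confirmed by the script's own trap later in this log (echo 1 > /sys/bus/pci/rescan); which files the per-device echoes land in is not visible here, so the driver_override/drivers_probe targets below are assumptions based on the usual sysfs binding interface:

  nvmes=(0000:00:10.0 0000:00:11.0)
  echo 1 > /sys/bus/pci/rescan                                  # bring the removed functions back
  for bdf in "${nvmes[@]}"; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
      echo "$bdf" > /sys/bus/pci/drivers_probe                  # probe with the override in place
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"     # clear the override again
  done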
00:10:27.480 00:10:28.413 QEMU NVMe Ctrl (12340 ): 24569 I/Os completed (+3099) 00:10:28.413 QEMU NVMe Ctrl (12341 ): 24659 I/Os completed (+3068) 00:10:28.413 00:10:29.346 QEMU NVMe Ctrl (12340 ): 27749 I/Os completed (+3180) 00:10:29.346 QEMU NVMe Ctrl (12341 ): 27987 I/Os completed (+3328) 00:10:29.346 00:10:30.278 QEMU NVMe Ctrl (12340 ): 30840 I/Os completed (+3091) 00:10:30.278 QEMU NVMe Ctrl (12341 ): 31079 I/Os completed (+3092) 00:10:30.278 00:10:31.650 QEMU NVMe Ctrl (12340 ): 33991 I/Os completed (+3151) 00:10:31.650 QEMU NVMe Ctrl (12341 ): 34175 I/Os completed (+3096) 00:10:31.650 00:10:32.581 QEMU NVMe Ctrl (12340 ): 37097 I/Os completed (+3106) 00:10:32.581 QEMU NVMe Ctrl (12341 ): 37305 I/Os completed (+3130) 00:10:32.581 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.581 [2024-11-20 16:37:17.283545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:32.581 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:32.581 [2024-11-20 16:37:17.284681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.284734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.284753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.284771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:32.581 [2024-11-20 16:37:17.286721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.286768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.286782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.286796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.581 [2024-11-20 16:37:17.305208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:32.581 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:32.581 [2024-11-20 16:37:17.307804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.307846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.307868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.307884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:32.581 [2024-11-20 16:37:17.309545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.309585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.309599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 [2024-11-20 16:37:17.309614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:32.581 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:32.581 EAL: Scan for (pci) bus failed. 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:32.581 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:32.840 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.840 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.840 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.840 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:32.840 Attaching to 0000:00:10.0 00:10:32.840 Attached to 0000:00:10.0 00:10:32.840 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:32.840 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.840 16:37:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:32.840 Attaching to 0000:00:11.0 00:10:32.840 Attached to 0000:00:11.0 00:10:33.405 QEMU NVMe Ctrl (12340 ): 2125 I/Os completed (+2125) 00:10:33.405 QEMU NVMe Ctrl (12341 ): 1879 I/Os completed (+1879) 00:10:33.405 00:10:34.337 QEMU NVMe Ctrl (12340 ): 5714 I/Os completed (+3589) 00:10:34.337 QEMU NVMe Ctrl (12341 ): 5463 I/Os completed (+3584) 00:10:34.337 00:10:35.709 QEMU NVMe Ctrl (12340 ): 9334 I/Os completed (+3620) 00:10:35.709 QEMU NVMe Ctrl (12341 ): 9075 I/Os completed (+3612) 00:10:35.709 00:10:36.275 QEMU NVMe Ctrl (12340 ): 12543 I/Os completed (+3209) 00:10:36.275 QEMU NVMe Ctrl (12341 ): 12599 I/Os completed (+3524) 00:10:36.275 00:10:37.649 QEMU NVMe Ctrl (12340 ): 15556 I/Os completed (+3013) 00:10:37.649 QEMU NVMe Ctrl (12341 ): 15725 I/Os completed (+3126) 00:10:37.649 00:10:38.581 QEMU NVMe Ctrl (12340 ): 18806 I/Os completed (+3250) 00:10:38.581 QEMU NVMe Ctrl (12341 ): 18992 I/Os completed (+3267) 00:10:38.581 00:10:39.589 QEMU NVMe Ctrl (12340 ): 22326 I/Os completed (+3520) 00:10:39.589 QEMU NVMe Ctrl (12341 ): 22527 I/Os completed (+3535) 00:10:39.589 
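Editor's note: the whole three-event sequence runs under timing_cmd, which sets TIMEFORMAT=%2R so bash's time builtin prints only elapsed seconds; that is where the "remove_attach_helper took 42.92s" summary further down comes from. A minimal, self-contained sketch of that wrapper (helper name and output format are illustrative, not the verbatim function):

  time_helper() {
      # run "$@", capture the %2R elapsed-seconds report that `time` writes to stderr
      local TIMEFORMAT=%2R elapsed
      elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
      printf '%s took %ss to complete\n' "$1" "$elapsed"
  }

  time_helper sleep 2      # -> "sleep took 2.00s to complete"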
00:10:40.545 QEMU NVMe Ctrl (12340 ): 25552 I/Os completed (+3226) 00:10:40.545 QEMU NVMe Ctrl (12341 ): 25799 I/Os completed (+3272) 00:10:40.545 00:10:41.489 QEMU NVMe Ctrl (12340 ): 28691 I/Os completed (+3139) 00:10:41.489 QEMU NVMe Ctrl (12341 ): 29004 I/Os completed (+3205) 00:10:41.489 00:10:42.440 QEMU NVMe Ctrl (12340 ): 31954 I/Os completed (+3263) 00:10:42.440 QEMU NVMe Ctrl (12341 ): 32262 I/Os completed (+3258) 00:10:42.440 00:10:43.384 QEMU NVMe Ctrl (12340 ): 35206 I/Os completed (+3252) 00:10:43.384 QEMU NVMe Ctrl (12341 ): 35506 I/Os completed (+3244) 00:10:43.385 00:10:44.352 QEMU NVMe Ctrl (12340 ): 38394 I/Os completed (+3188) 00:10:44.352 QEMU NVMe Ctrl (12341 ): 38698 I/Os completed (+3192) 00:10:44.352 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.703 [2024-11-20 16:37:29.542402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:44.703 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:44.703 [2024-11-20 16:37:29.543640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.543688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.543705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.543721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:44.703 [2024-11-20 16:37:29.545806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.545847] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.545860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.545874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.703 [2024-11-20 16:37:29.570564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:44.703 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:44.703 [2024-11-20 16:37:29.571639] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.571686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.571704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.571719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:44.703 [2024-11-20 16:37:29.573361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.573410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.573429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 [2024-11-20 16:37:29.573442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:44.703 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:44.965 Attaching to 0000:00:10.0 00:10:44.965 Attached to 0000:00:10.0 00:10:44.965 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:45.227 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:45.227 16:37:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:45.227 Attaching to 0000:00:11.0 00:10:45.227 Attached to 0000:00:11.0 00:10:45.227 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:45.227 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:45.227 [2024-11-20 16:37:29.871051] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:57.463 16:37:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:57.463 16:37:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.463 16:37:41 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.92 00:10:57.463 16:37:41 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.92 00:10:57.463 16:37:41 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:57.463 16:37:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.92 00:10:57.463 16:37:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.92 2 00:10:57.463 remove_attach_helper took 42.92s to complete (handling 2 nvme drive(s)) 16:37:41 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66702 00:11:04.041 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66702) - No such process 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66702 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67247 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67247 00:11:04.041 16:37:47 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67247 ']' 00:11:04.041 16:37:47 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:04.041 16:37:47 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.041 16:37:47 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.041 16:37:47 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.041 16:37:47 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.041 16:37:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.041 [2024-11-20 16:37:47.949052] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:11:04.041 [2024-11-20 16:37:47.949170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67247 ] 00:11:04.041 [2024-11-20 16:37:48.111154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.041 [2024-11-20 16:37:48.211759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:04.041 16:37:48 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:04.041 16:37:48 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:04.041 16:37:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.593 16:37:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.593 16:37:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.593 16:37:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.593 16:37:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.593 [2024-11-20 16:37:54.904775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:10.593 [2024-11-20 16:37:54.906113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:54.906150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:54.906164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 [2024-11-20 16:37:54.906181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:54.906189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:54.906197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 [2024-11-20 16:37:54.906204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:54.906213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:54.906219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 [2024-11-20 16:37:54.906230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:54.906236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:54.906244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 [2024-11-20 16:37:55.304763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:10.593 [2024-11-20 16:37:55.306067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:55.306098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:55.306111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 [2024-11-20 16:37:55.306125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:55.306134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:55.306141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 [2024-11-20 16:37:55.306149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:55.306156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:55.306164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 [2024-11-20 16:37:55.306171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.593 [2024-11-20 16:37:55.306179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.593 [2024-11-20 16:37:55.306185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.593 16:37:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.593 16:37:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.593 16:37:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.593 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
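Editor's note: in this second, bdev-backed phase the script no longer watches the PCI bus directly; it asks the running SPDK target which NVMe bdevs still exist and loops until the removed BDFs disappear (sw_hotplug.sh@12-@13 and @50-@51 above). A sketch of that polling, with the rpc.py path and default socket assumed to match this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # assumption: repo layout of this job

  bdev_bdfs() {
      # list the PCI addresses backing the target's NVMe bdevs, one per line
      "$rpc" bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }

  bdfs=($(bdev_bdfs))
  while ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
      bdfs=($(bdev_bdfs))
  done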
00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.851 16:37:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.044 16:38:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.044 16:38:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.044 16:38:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.044 [2024-11-20 16:38:07.704997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:23.044 [2024-11-20 16:38:07.706543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.044 [2024-11-20 16:38:07.706581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.044 [2024-11-20 16:38:07.706592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.044 [2024-11-20 16:38:07.706610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.044 [2024-11-20 16:38:07.706618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.044 [2024-11-20 16:38:07.706627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.044 [2024-11-20 16:38:07.706634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.044 [2024-11-20 16:38:07.706642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.044 [2024-11-20 16:38:07.706649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.044 [2024-11-20 16:38:07.706658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.044 [2024-11-20 16:38:07.706664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.044 [2024-11-20 16:38:07.706672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.044 16:38:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.044 16:38:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.044 16:38:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:23.044 16:38:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.302 [2024-11-20 16:38:08.105010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:23.302 [2024-11-20 16:38:08.106418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.302 [2024-11-20 16:38:08.106448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.302 [2024-11-20 16:38:08.106462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.303 [2024-11-20 16:38:08.106478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.303 [2024-11-20 16:38:08.106488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.303 [2024-11-20 16:38:08.106495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.303 [2024-11-20 16:38:08.106504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.303 [2024-11-20 16:38:08.106511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.303 [2024-11-20 16:38:08.106519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.303 [2024-11-20 16:38:08.106526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.303 [2024-11-20 16:38:08.106534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.303 [2024-11-20 16:38:08.106540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.562 16:38:08 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.562 16:38:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.562 16:38:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.562 16:38:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.562 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.820 16:38:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.009 16:38:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.009 16:38:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.009 16:38:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.009 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.009 [2024-11-20 16:38:20.605215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
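Editor's note: once the controllers are re-attached, sw_hotplug.sh@71 (seen above) compares the sorted bdev BDF list against the original pair before starting the next event. A small stand-alone version of that check; the variable names are illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  expected="0000:00:10.0 0000:00:11.0"

  bdfs=($("$rpc" bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u))
  if [[ "${bdfs[*]}" != "$expected" ]]; then
      echo "hotplug re-attach mismatch: got '${bdfs[*]}', want '$expected'" >&2
      exit 1
  fi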
00:11:36.009 [2024-11-20 16:38:20.606635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.009 [2024-11-20 16:38:20.606736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.009 [2024-11-20 16:38:20.606802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.009 [2024-11-20 16:38:20.606838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.009 [2024-11-20 16:38:20.606921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.009 [2024-11-20 16:38:20.606977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.009 [2024-11-20 16:38:20.607001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.009 [2024-11-20 16:38:20.607019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.009 [2024-11-20 16:38:20.607043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.009 [2024-11-20 16:38:20.607180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.009 [2024-11-20 16:38:20.607198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.010 [2024-11-20 16:38:20.607255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.010 16:38:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.010 16:38:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.010 16:38:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:36.010 16:38:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.267 [2024-11-20 16:38:21.005218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:36.267 [2024-11-20 16:38:21.006562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.267 [2024-11-20 16:38:21.006593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.267 [2024-11-20 16:38:21.006606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.267 [2024-11-20 16:38:21.006622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.268 [2024-11-20 16:38:21.006631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.268 [2024-11-20 16:38:21.006638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.268 [2024-11-20 16:38:21.006647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.268 [2024-11-20 16:38:21.006654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.268 [2024-11-20 16:38:21.006664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.268 [2024-11-20 16:38:21.006671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.268 [2024-11-20 16:38:21.006679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.268 [2024-11-20 16:38:21.006685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.268 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:36.268 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.268 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.525 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.525 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.525 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.525 16:38:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.525 16:38:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.525 16:38:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.525 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.525 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.525 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:36.526 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.782 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.782 16:38:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:49.000 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:49.000 16:38:33 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:49.000 16:38:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:49.000 16:38:33 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.573 16:38:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.573 16:38:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.573 16:38:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:55.573 16:38:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.573 [2024-11-20 16:38:39.572665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:55.573 [2024-11-20 16:38:39.573777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:39.573820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.573 [2024-11-20 16:38:39.573831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.573 [2024-11-20 16:38:39.573849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:39.573857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.573 [2024-11-20 16:38:39.573865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.573 [2024-11-20 16:38:39.573873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:39.573881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.573 [2024-11-20 16:38:39.573888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.573 [2024-11-20 16:38:39.573900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:39.573907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.573 [2024-11-20 16:38:39.573917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.573 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:55.573 16:38:40 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.573 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.573 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.573 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.573 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.573 16:38:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.573 16:38:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.573 [2024-11-20 16:38:40.072657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:55.573 [2024-11-20 16:38:40.073716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:40.073840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.573 [2024-11-20 16:38:40.073858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.573 [2024-11-20 16:38:40.073875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:40.073885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.573 [2024-11-20 16:38:40.073892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.573 [2024-11-20 16:38:40.073902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:40.073909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.573 [2024-11-20 16:38:40.073917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.573 [2024-11-20 16:38:40.073925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.573 [2024-11-20 16:38:40.073933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.574 [2024-11-20 16:38:40.073939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.574 16:38:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.574 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:55.574 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.830 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:55.830 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.830 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.830 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.830 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.830 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.830 16:38:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.830 16:38:40 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.831 16:38:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.831 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.831 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.831 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.831 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.831 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.088 16:38:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.282 16:38:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.282 16:38:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.282 16:38:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:08.282 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.283 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.283 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.283 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.283 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.283 16:38:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.283 16:38:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.283 [2024-11-20 16:38:52.972948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
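Editor's note: between helper runs the script toggles the target's hotplug monitor over JSON-RPC (bdev_nvme_set_hotplug -e at sw_hotplug.sh@115 earlier, then -d and -e again at @119/@120). Only the -e/-d flags appear in this trace; a minimal sketch of issuing the same calls by hand:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" bdev_nvme_set_hotplug -d      # stop monitoring for surprise add/remove
  "$rpc" bdev_nvme_set_hotplug -e      # re-enable it before the next cycle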
00:12:08.283 [2024-11-20 16:38:52.974063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.283 [2024-11-20 16:38:52.974160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.283 [2024-11-20 16:38:52.974277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.283 [2024-11-20 16:38:52.974349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.283 [2024-11-20 16:38:52.974387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.283 [2024-11-20 16:38:52.974415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.283 [2024-11-20 16:38:52.974440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.283 [2024-11-20 16:38:52.974491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.283 [2024-11-20 16:38:52.974517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.283 [2024-11-20 16:38:52.974544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.283 [2024-11-20 16:38:52.974560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.283 [2024-11-20 16:38:52.974632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.283 16:38:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.283 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:08.283 16:38:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.847 16:38:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.847 16:38:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.847 16:38:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:08.847 16:38:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:08.847 [2024-11-20 16:38:53.672966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:08.847 [2024-11-20 16:38:53.674018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.847 [2024-11-20 16:38:53.674051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.847 [2024-11-20 16:38:53.674064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.847 [2024-11-20 16:38:53.674081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.847 [2024-11-20 16:38:53.674093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.847 [2024-11-20 16:38:53.674100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.847 [2024-11-20 16:38:53.674108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.847 [2024-11-20 16:38:53.674115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.847 [2024-11-20 16:38:53.674124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.847 [2024-11-20 16:38:53.674132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.847 [2024-11-20 16:38:53.674140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.847 [2024-11-20 16:38:53.674146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.411 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.412 16:38:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.412 16:38:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.412 16:38:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.412 16:38:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:21.604 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:21.604 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:21.604 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:21.604 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.604 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.604 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.604 16:39:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.604 16:39:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.604 16:39:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.605 16:39:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.605 16:39:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.605 [2024-11-20 16:39:06.373293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:21.605 [2024-11-20 16:39:06.374545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.605 [2024-11-20 16:39:06.374585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.605 [2024-11-20 16:39:06.374599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.605 [2024-11-20 16:39:06.374623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.605 [2024-11-20 16:39:06.374633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.605 [2024-11-20 16:39:06.374644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.605 [2024-11-20 16:39:06.374654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.605 [2024-11-20 16:39:06.374667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.605 [2024-11-20 16:39:06.374675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.605 [2024-11-20 16:39:06.374687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.605 [2024-11-20 16:39:06.374695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.605 [2024-11-20 16:39:06.374706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.605 16:39:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:21.605 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:22.180 [2024-11-20 16:39:06.873322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:22.180 [2024-11-20 16:39:06.874637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.180 [2024-11-20 16:39:06.874676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.180 [2024-11-20 16:39:06.874692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.180 [2024-11-20 16:39:06.874712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.180 [2024-11-20 16:39:06.874723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.180 [2024-11-20 16:39:06.874732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.180 [2024-11-20 16:39:06.874744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.180 [2024-11-20 16:39:06.874753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.180 [2024-11-20 16:39:06.874763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.180 [2024-11-20 16:39:06.874773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.180 [2024-11-20 16:39:06.874786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.180 [2024-11-20 16:39:06.874794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.180 16:39:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.180 16:39:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.180 16:39:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:22.180 16:39:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.441 16:39:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:34.689 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:34.689 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:34.689 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:34.689 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.689 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.689 16:39:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.689 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.690 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.690 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.81 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.81 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:34.690 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.81 00:12:34.690 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.81 2 00:12:34.690 remove_attach_helper took 45.81s to complete (handling 2 nvme drive(s)) 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:34.690 16:39:19 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67247 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67247 ']' 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67247 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67247 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.690 killing process with pid 67247 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67247' 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67247 00:12:34.690 16:39:19 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67247 00:12:36.073 16:39:20 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:36.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:36.644 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:36.644 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:36.644 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.644 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.644 ************************************ 00:12:36.644 END TEST sw_hotplug 00:12:36.644 ************************************ 00:12:36.644 
00:12:36.644 real 2m30.208s 00:12:36.644 user 1m51.372s 00:12:36.644 sys 0m17.372s 00:12:36.644 16:39:21 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.644 16:39:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.644 16:39:21 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:36.644 16:39:21 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:36.644 16:39:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:36.644 16:39:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.644 16:39:21 -- common/autotest_common.sh@10 -- # set +x 00:12:36.907 ************************************ 00:12:36.907 START TEST nvme_xnvme 00:12:36.907 ************************************ 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:36.907 * Looking for test storage... 00:12:36.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.907 16:39:21 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.907 --rc genhtml_branch_coverage=1 00:12:36.907 --rc genhtml_function_coverage=1 00:12:36.907 --rc genhtml_legend=1 00:12:36.907 --rc geninfo_all_blocks=1 00:12:36.907 --rc geninfo_unexecuted_blocks=1 00:12:36.907 00:12:36.907 ' 00:12:36.907 16:39:21 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.907 --rc genhtml_branch_coverage=1 00:12:36.907 --rc genhtml_function_coverage=1 00:12:36.907 --rc genhtml_legend=1 00:12:36.907 --rc geninfo_all_blocks=1 00:12:36.907 --rc geninfo_unexecuted_blocks=1 00:12:36.907 00:12:36.907 ' 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.908 --rc genhtml_branch_coverage=1 00:12:36.908 --rc genhtml_function_coverage=1 00:12:36.908 --rc genhtml_legend=1 00:12:36.908 --rc geninfo_all_blocks=1 00:12:36.908 --rc geninfo_unexecuted_blocks=1 00:12:36.908 00:12:36.908 ' 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.908 --rc genhtml_branch_coverage=1 00:12:36.908 --rc genhtml_function_coverage=1 00:12:36.908 --rc genhtml_legend=1 00:12:36.908 --rc geninfo_all_blocks=1 00:12:36.908 --rc geninfo_unexecuted_blocks=1 00:12:36.908 00:12:36.908 ' 00:12:36.908 16:39:21 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:36.908 16:39:21 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:36.908 16:39:21 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:36.908 16:39:21 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:36.908 16:39:21 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:36.908 16:39:21 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:36.908 16:39:21 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:36.909 #define SPDK_CONFIG_H 00:12:36.909 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:36.909 #define SPDK_CONFIG_APPS 1 00:12:36.909 #define SPDK_CONFIG_ARCH native 00:12:36.909 #define SPDK_CONFIG_ASAN 1 00:12:36.909 #undef SPDK_CONFIG_AVAHI 00:12:36.909 #undef SPDK_CONFIG_CET 00:12:36.909 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:36.909 #define SPDK_CONFIG_COVERAGE 1 00:12:36.909 #define SPDK_CONFIG_CROSS_PREFIX 00:12:36.909 #undef SPDK_CONFIG_CRYPTO 00:12:36.909 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:36.909 #undef SPDK_CONFIG_CUSTOMOCF 00:12:36.909 #undef SPDK_CONFIG_DAOS 00:12:36.909 #define SPDK_CONFIG_DAOS_DIR 00:12:36.909 #define SPDK_CONFIG_DEBUG 1 00:12:36.909 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:36.909 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:36.909 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:36.909 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:36.909 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:36.909 #undef SPDK_CONFIG_DPDK_UADK 00:12:36.909 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:36.909 #define SPDK_CONFIG_EXAMPLES 1 00:12:36.909 #undef SPDK_CONFIG_FC 00:12:36.909 #define SPDK_CONFIG_FC_PATH 00:12:36.909 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:36.909 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:36.909 #define SPDK_CONFIG_FSDEV 1 00:12:36.909 #undef SPDK_CONFIG_FUSE 00:12:36.909 #undef SPDK_CONFIG_FUZZER 00:12:36.909 #define SPDK_CONFIG_FUZZER_LIB 00:12:36.909 #undef SPDK_CONFIG_GOLANG 00:12:36.909 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:36.909 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:36.909 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:36.909 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:36.909 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:36.909 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:36.909 #undef SPDK_CONFIG_HAVE_LZ4 00:12:36.909 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:36.909 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:36.909 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:36.909 #define SPDK_CONFIG_IDXD 1 00:12:36.909 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:36.909 #undef SPDK_CONFIG_IPSEC_MB 00:12:36.909 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:36.909 #define SPDK_CONFIG_ISAL 1 00:12:36.909 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:36.909 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:36.909 #define SPDK_CONFIG_LIBDIR 00:12:36.909 #undef SPDK_CONFIG_LTO 00:12:36.909 #define SPDK_CONFIG_MAX_LCORES 128 00:12:36.909 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:36.909 #define SPDK_CONFIG_NVME_CUSE 1 00:12:36.909 #undef SPDK_CONFIG_OCF 00:12:36.909 #define SPDK_CONFIG_OCF_PATH 00:12:36.909 #define SPDK_CONFIG_OPENSSL_PATH 00:12:36.909 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:36.909 #define SPDK_CONFIG_PGO_DIR 00:12:36.909 #undef SPDK_CONFIG_PGO_USE 00:12:36.909 #define SPDK_CONFIG_PREFIX /usr/local 00:12:36.909 #undef SPDK_CONFIG_RAID5F 00:12:36.909 #undef SPDK_CONFIG_RBD 00:12:36.909 #define SPDK_CONFIG_RDMA 1 00:12:36.909 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:36.909 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:36.909 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:36.909 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:36.909 #define SPDK_CONFIG_SHARED 1 00:12:36.909 #undef SPDK_CONFIG_SMA 00:12:36.909 #define SPDK_CONFIG_TESTS 1 00:12:36.909 #undef SPDK_CONFIG_TSAN 00:12:36.909 #define SPDK_CONFIG_UBLK 1 00:12:36.909 #define SPDK_CONFIG_UBSAN 1 00:12:36.909 #undef SPDK_CONFIG_UNIT_TESTS 00:12:36.909 #undef SPDK_CONFIG_URING 00:12:36.909 #define SPDK_CONFIG_URING_PATH 00:12:36.909 #undef SPDK_CONFIG_URING_ZNS 00:12:36.909 #undef SPDK_CONFIG_USDT 00:12:36.909 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:36.909 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:36.909 #undef SPDK_CONFIG_VFIO_USER 00:12:36.909 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:36.909 #define SPDK_CONFIG_VHOST 1 00:12:36.909 #define SPDK_CONFIG_VIRTIO 1 00:12:36.909 #undef SPDK_CONFIG_VTUNE 00:12:36.909 #define SPDK_CONFIG_VTUNE_DIR 00:12:36.909 #define SPDK_CONFIG_WERROR 1 00:12:36.909 #define SPDK_CONFIG_WPDK_DIR 00:12:36.909 #define SPDK_CONFIG_XNVME 1 00:12:36.909 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:36.909 16:39:21 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:36.909 16:39:21 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.909 16:39:21 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.909 16:39:21 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.909 16:39:21 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.909 16:39:21 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.909 16:39:21 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.909 16:39:21 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.909 16:39:21 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.909 16:39:21 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:36.909 16:39:21 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:36.909 
16:39:21 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:36.909 16:39:21 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:36.909 16:39:21 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:36.909 16:39:21 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:36.910 16:39:21 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:36.910 16:39:21 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:36.910 16:39:21 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68610 ]] 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68610 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:36.911 16:39:21 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.FuvmhJ 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.FuvmhJ/tests/xnvme /tmp/spdk.FuvmhJ 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:37.172 16:39:21 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974949888 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593001984 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974949888 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593001984 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91499216896 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=8203563008 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:37.172 * Looking for test storage... 
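The df/read loop and the candidate walk traced above implement set_test_storage: build per-mount tables of filesystem type and free space, then pick the first storage candidate whose mount has at least the requested room and is not RAM-backed. A condensed sketch of that logic (variable names follow the trace; the real function in autotest_common.sh carries extra guards, e.g. for the root mount, that are omitted here):

    # $testdir and $storage_fallback are set earlier (storage_fallback comes from mktemp -udt spdk.XXXXXX in the trace)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    declare -A fss avails
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail          # free space on this mount, as reported by df
    done < <(df -T | grep -v Filesystem)

    requested_size=2214592512            # 2 GiB for the test plus 64 MiB of slack, as in the trace
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space >= requested_size )) || continue
        [[ ${fss[$mount]} == tmpfs || ${fss[$mount]} == ramfs ]] && continue
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    done

In this run the first candidate, /home/vagrant/spdk_repo/spdk/test/nvme/xnvme on the btrfs /home mount, has ~13.9 GB free and is accepted immediately.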
00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974949888 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:37.172 16:39:21 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.173 --rc genhtml_branch_coverage=1 00:12:37.173 --rc genhtml_function_coverage=1 00:12:37.173 --rc genhtml_legend=1 00:12:37.173 --rc geninfo_all_blocks=1 00:12:37.173 --rc geninfo_unexecuted_blocks=1 00:12:37.173 00:12:37.173 ' 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.173 --rc genhtml_branch_coverage=1 00:12:37.173 --rc genhtml_function_coverage=1 00:12:37.173 --rc genhtml_legend=1 00:12:37.173 --rc geninfo_all_blocks=1 
00:12:37.173 --rc geninfo_unexecuted_blocks=1 00:12:37.173 00:12:37.173 ' 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.173 --rc genhtml_branch_coverage=1 00:12:37.173 --rc genhtml_function_coverage=1 00:12:37.173 --rc genhtml_legend=1 00:12:37.173 --rc geninfo_all_blocks=1 00:12:37.173 --rc geninfo_unexecuted_blocks=1 00:12:37.173 00:12:37.173 ' 00:12:37.173 16:39:21 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.173 --rc genhtml_branch_coverage=1 00:12:37.173 --rc genhtml_function_coverage=1 00:12:37.173 --rc genhtml_legend=1 00:12:37.173 --rc geninfo_all_blocks=1 00:12:37.173 --rc geninfo_unexecuted_blocks=1 00:12:37.173 00:12:37.173 ' 00:12:37.173 16:39:21 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.173 16:39:21 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.173 16:39:21 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.173 16:39:21 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.173 16:39:21 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.173 16:39:21 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:37.173 16:39:21 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.173 16:39:21 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:37.173 16:39:21 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:37.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:37.693 Waiting for block devices as requested 00:12:37.693 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.693 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.693 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.954 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.236 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:43.236 16:39:27 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:43.236 16:39:28 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:43.236 16:39:28 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:43.495 16:39:28 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:43.495 16:39:28 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:43.495 No valid GPT data, bailing 00:12:43.495 16:39:28 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:43.495 16:39:28 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:43.495 16:39:28 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:43.495 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:43.496 16:39:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:43.496 16:39:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:43.496 16:39:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.496 16:39:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.496 ************************************ 00:12:43.496 START TEST xnvme_rpc 00:12:43.496 ************************************ 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69003 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69003 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69003 ']' 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.496 16:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.755 [2024-11-20 16:39:28.439661] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:12:43.755 [2024-11-20 16:39:28.439945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69003 ] 00:12:43.755 [2024-11-20 16:39:28.597732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.015 [2024-11-20 16:39:28.699673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.585 xnvme_bdev 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:44.585 16:39:29 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69003 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69003 ']' 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69003 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.585 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69003 00:12:44.908 killing process with pid 69003 00:12:44.908 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.908 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.908 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69003' 00:12:44.908 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69003 00:12:44.908 16:39:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69003 00:12:46.342 ************************************ 00:12:46.342 END TEST xnvme_rpc 00:12:46.342 ************************************ 00:12:46.342 00:12:46.342 real 0m2.661s 00:12:46.342 user 0m2.749s 00:12:46.342 sys 0m0.361s 00:12:46.342 16:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.342 16:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.342 16:39:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:46.342 16:39:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.342 16:39:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.342 16:39:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.342 ************************************ 00:12:46.342 START TEST xnvme_bdevperf 00:12:46.342 ************************************ 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:46.342 16:39:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.342 { 00:12:46.342 "subsystems": [ 00:12:46.342 { 00:12:46.342 "subsystem": "bdev", 00:12:46.342 "config": [ 00:12:46.342 { 00:12:46.342 "params": { 00:12:46.342 "io_mechanism": "libaio", 00:12:46.342 "conserve_cpu": false, 00:12:46.342 "filename": "/dev/nvme0n1", 00:12:46.342 "name": "xnvme_bdev" 00:12:46.342 }, 00:12:46.342 "method": "bdev_xnvme_create" 00:12:46.342 }, 00:12:46.342 { 00:12:46.342 "method": "bdev_wait_for_examine" 00:12:46.342 } 00:12:46.342 ] 00:12:46.342 } 00:12:46.342 ] 00:12:46.342 } 00:12:46.342 [2024-11-20 16:39:31.148454] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:12:46.342 [2024-11-20 16:39:31.148583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69072 ] 00:12:46.604 [2024-11-20 16:39:31.310559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.604 [2024-11-20 16:39:31.412810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.868 Running I/O for 5 seconds... 00:12:49.201 23804.00 IOPS, 92.98 MiB/s [2024-11-20T16:39:35.030Z] 23416.50 IOPS, 91.47 MiB/s [2024-11-20T16:39:35.972Z] 22986.33 IOPS, 89.79 MiB/s [2024-11-20T16:39:37.004Z] 23018.50 IOPS, 89.92 MiB/s 00:12:52.118 Latency(us) 00:12:52.118 [2024-11-20T16:39:37.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.118 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:52.118 xnvme_bdev : 5.00 23228.74 90.74 0.00 0.00 2750.05 466.31 11090.71 00:12:52.118 [2024-11-20T16:39:37.004Z] =================================================================================================================== 00:12:52.118 [2024-11-20T16:39:37.004Z] Total : 23228.74 90.74 0.00 0.00 2750.05 466.31 11090.71 00:12:52.688 16:39:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:52.688 16:39:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:52.688 16:39:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:52.688 16:39:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:52.688 16:39:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:52.688 { 00:12:52.688 "subsystems": [ 00:12:52.688 { 00:12:52.688 "subsystem": "bdev", 00:12:52.688 "config": [ 00:12:52.688 { 00:12:52.688 "params": { 00:12:52.688 "io_mechanism": "libaio", 00:12:52.688 "conserve_cpu": false, 00:12:52.688 "filename": "/dev/nvme0n1", 00:12:52.688 "name": "xnvme_bdev" 00:12:52.688 }, 00:12:52.688 "method": "bdev_xnvme_create" 00:12:52.688 }, 00:12:52.688 { 00:12:52.688 "method": "bdev_wait_for_examine" 00:12:52.689 } 00:12:52.689 ] 00:12:52.689 } 00:12:52.689 ] 00:12:52.689 } 00:12:52.689 [2024-11-20 16:39:37.457488] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:12:52.689 [2024-11-20 16:39:37.457610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69148 ] 00:12:52.948 [2024-11-20 16:39:37.619254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.948 [2024-11-20 16:39:37.722390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.208 Running I/O for 5 seconds... 00:12:55.537 32548.00 IOPS, 127.14 MiB/s [2024-11-20T16:39:40.994Z] 31566.50 IOPS, 123.31 MiB/s [2024-11-20T16:39:42.375Z] 32748.33 IOPS, 127.92 MiB/s [2024-11-20T16:39:43.317Z] 32123.50 IOPS, 125.48 MiB/s 00:12:58.431 Latency(us) 00:12:58.431 [2024-11-20T16:39:43.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.431 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:58.431 xnvme_bdev : 5.00 32034.70 125.14 0.00 0.00 1992.87 244.18 12351.02 00:12:58.431 [2024-11-20T16:39:43.317Z] =================================================================================================================== 00:12:58.431 [2024-11-20T16:39:43.317Z] Total : 32034.70 125.14 0.00 0.00 1992.87 244.18 12351.02 00:12:59.004 00:12:59.004 real 0m12.646s 00:12:59.004 user 0m4.785s 00:12:59.004 sys 0m6.533s 00:12:59.004 16:39:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.004 ************************************ 00:12:59.004 END TEST xnvme_bdevperf 00:12:59.004 ************************************ 00:12:59.004 16:39:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:59.004 16:39:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:59.004 16:39:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:59.004 16:39:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.004 16:39:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.004 ************************************ 00:12:59.004 START TEST xnvme_fio_plugin 00:12:59.004 ************************************ 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:59.004 16:39:43 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:59.004 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:59.005 16:39:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.005 { 00:12:59.005 "subsystems": [ 00:12:59.005 { 00:12:59.005 "subsystem": "bdev", 00:12:59.005 "config": [ 00:12:59.005 { 00:12:59.005 "params": { 00:12:59.005 "io_mechanism": "libaio", 00:12:59.005 "conserve_cpu": false, 00:12:59.005 "filename": "/dev/nvme0n1", 00:12:59.005 "name": "xnvme_bdev" 00:12:59.005 }, 00:12:59.005 "method": "bdev_xnvme_create" 00:12:59.005 }, 00:12:59.005 { 00:12:59.005 "method": "bdev_wait_for_examine" 00:12:59.005 } 00:12:59.005 ] 00:12:59.005 } 00:12:59.005 ] 00:12:59.005 } 00:12:59.265 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:59.265 fio-3.35 00:12:59.265 Starting 1 thread 00:13:05.853 00:13:05.853 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69267: Wed Nov 20 16:39:49 2024 00:13:05.853 read: IOPS=31.4k, BW=123MiB/s (129MB/s)(613MiB/5001msec) 00:13:05.853 slat (usec): min=4, max=2181, avg=22.76, stdev=100.60 00:13:05.853 clat (usec): min=105, max=5721, avg=1422.01, stdev=568.47 00:13:05.853 lat (usec): min=174, max=5756, avg=1444.77, stdev=559.81 00:13:05.853 clat percentiles (usec): 00:13:05.853 | 1.00th=[ 277], 5.00th=[ 545], 10.00th=[ 717], 20.00th=[ 971], 00:13:05.853 | 30.00th=[ 1123], 40.00th=[ 1270], 50.00th=[ 1401], 60.00th=[ 1516], 00:13:05.853 | 70.00th=[ 1663], 80.00th=[ 1827], 90.00th=[ 2089], 95.00th=[ 2409], 00:13:05.853 | 99.00th=[ 3130], 99.50th=[ 3326], 99.90th=[ 3949], 99.95th=[ 4490], 00:13:05.853 | 99.99th=[ 5604] 00:13:05.853 bw ( KiB/s): min=116384, max=129640, per=98.45%, avg=123616.33, stdev=5473.87, 
samples=9 00:13:05.853 iops : min=29096, max=32410, avg=30904.00, stdev=1368.42, samples=9 00:13:05.853 lat (usec) : 250=0.71%, 500=3.47%, 750=6.88%, 1000=10.46% 00:13:05.853 lat (msec) : 2=65.65%, 4=12.75%, 10=0.09% 00:13:05.853 cpu : usr=41.56%, sys=50.30%, ctx=12, majf=0, minf=764 00:13:05.853 IO depths : 1=0.4%, 2=1.2%, 4=3.1%, 8=8.4%, 16=23.3%, 32=61.5%, >=64=2.1% 00:13:05.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.853 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:05.853 issued rwts: total=156987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:05.853 00:13:05.853 Run status group 0 (all jobs): 00:13:05.853 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=613MiB (643MB), run=5001-5001msec 00:13:05.853 ----------------------------------------------------- 00:13:05.853 Suppressions used: 00:13:05.853 count bytes template 00:13:05.853 1 11 /usr/src/fio/parse.c 00:13:05.853 1 8 libtcmalloc_minimal.so 00:13:05.853 1 904 libcrypto.so 00:13:05.853 ----------------------------------------------------- 00:13:05.853 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:05.853 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:05.854 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:05.854 16:39:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.854 { 00:13:05.854 "subsystems": [ 00:13:05.854 { 00:13:05.854 "subsystem": "bdev", 00:13:05.854 "config": [ 00:13:05.854 { 00:13:05.854 "params": { 00:13:05.854 "io_mechanism": "libaio", 00:13:05.854 "conserve_cpu": false, 00:13:05.854 "filename": "/dev/nvme0n1", 00:13:05.854 "name": "xnvme_bdev" 00:13:05.854 }, 00:13:05.854 "method": "bdev_xnvme_create" 00:13:05.854 }, 00:13:05.854 { 00:13:05.854 "method": "bdev_wait_for_examine" 00:13:05.854 } 00:13:05.854 ] 00:13:05.854 } 00:13:05.854 ] 00:13:05.854 } 00:13:06.113 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:06.113 fio-3.35 00:13:06.113 Starting 1 thread 00:13:12.714 00:13:12.714 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69353: Wed Nov 20 16:39:56 2024 00:13:12.714 write: IOPS=32.2k, BW=126MiB/s (132MB/s)(630MiB/5001msec); 0 zone resets 00:13:12.714 slat (usec): min=4, max=1876, avg=22.42, stdev=99.62 00:13:12.714 clat (usec): min=106, max=4735, avg=1381.54, stdev=550.99 00:13:12.714 lat (usec): min=171, max=4900, avg=1403.96, stdev=541.79 00:13:12.714 clat percentiles (usec): 00:13:12.714 | 1.00th=[ 265], 5.00th=[ 506], 10.00th=[ 701], 20.00th=[ 947], 00:13:12.714 | 30.00th=[ 1106], 40.00th=[ 1237], 50.00th=[ 1352], 60.00th=[ 1483], 00:13:12.714 | 70.00th=[ 1614], 80.00th=[ 1778], 90.00th=[ 2040], 95.00th=[ 2311], 00:13:12.714 | 99.00th=[ 3032], 99.50th=[ 3261], 99.90th=[ 3851], 99.95th=[ 3982], 00:13:12.714 | 99.99th=[ 4555] 00:13:12.714 bw ( KiB/s): min=117768, max=140784, per=100.00%, avg=129413.11, stdev=8347.75, samples=9 00:13:12.714 iops : min=29442, max=35196, avg=32353.22, stdev=2086.93, samples=9 00:13:12.714 lat (usec) : 250=0.82%, 500=4.05%, 750=6.75%, 1000=11.36% 00:13:12.714 lat (msec) : 2=65.77%, 4=11.22%, 10=0.04% 00:13:12.714 cpu : usr=41.68%, sys=50.58%, ctx=15, majf=0, minf=764 00:13:12.714 IO depths : 1=0.5%, 2=1.2%, 4=3.2%, 8=8.7%, 16=23.3%, 32=61.0%, >=64=2.0% 00:13:12.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.714 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:12.714 issued rwts: total=0,161261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:12.714 00:13:12.714 Run status group 0 (all jobs): 00:13:12.714 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=630MiB (661MB), run=5001-5001msec 00:13:12.714 ----------------------------------------------------- 00:13:12.714 Suppressions used: 00:13:12.714 count bytes template 00:13:12.714 1 11 /usr/src/fio/parse.c 00:13:12.714 1 8 libtcmalloc_minimal.so 00:13:12.714 1 904 libcrypto.so 00:13:12.714 ----------------------------------------------------- 00:13:12.714 00:13:12.714 ************************************ 00:13:12.714 END TEST xnvme_fio_plugin 00:13:12.714 ************************************ 
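For reference, the randwrite fio run that just finished can be reproduced outside the harness: the harness pipes the JSON bdev configuration to fio over /dev/fd/62 and preloads the spdk_bdev ioengine plugin (together with libasan, because this build is ASAN-instrumented). A sketch using a temporary file instead of the fd trick (the /tmp path is illustrative; every option and the JSON body are copied from the command line and config shown above):

    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": false,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # libasan is preloaded first only because the plugin is built with ASAN in this run.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name=xnvme_bdev

Note that --filename names the bdev created by bdev_xnvme_create, not a kernel device node, which is why no /dev path appears on the fio command line.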
00:13:12.714 00:13:12.714 real 0m13.531s 00:13:12.714 user 0m6.831s 00:13:12.714 sys 0m5.532s 00:13:12.714 16:39:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.714 16:39:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:12.714 16:39:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:12.714 16:39:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:12.714 16:39:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:12.714 16:39:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:12.714 16:39:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.714 16:39:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.714 16:39:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.714 ************************************ 00:13:12.714 START TEST xnvme_rpc 00:13:12.714 ************************************ 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69438 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69438 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69438 ']' 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:12.714 16:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.714 [2024-11-20 16:39:57.456736] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:13:12.714 [2024-11-20 16:39:57.456857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69438 ] 00:13:12.974 [2024-11-20 16:39:57.618615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.974 [2024-11-20 16:39:57.773229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.544 xnvme_bdev 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.544 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:13.804 16:39:58 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69438 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69438 ']' 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69438 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69438 00:13:13.804 killing process with pid 69438 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69438' 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69438 00:13:13.804 16:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69438 00:13:15.198 00:13:15.198 real 0m2.677s 00:13:15.198 user 0m2.737s 00:13:15.198 sys 0m0.377s 00:13:15.198 ************************************ 00:13:15.198 END TEST xnvme_rpc 00:13:15.198 ************************************ 00:13:15.198 16:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.198 16:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 16:40:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:15.457 16:40:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.457 16:40:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.457 16:40:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 ************************************ 00:13:15.457 START TEST xnvme_bdevperf 00:13:15.457 ************************************ 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:15.458 16:40:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:15.458 { 00:13:15.458 "subsystems": [ 00:13:15.458 { 00:13:15.458 "subsystem": "bdev", 00:13:15.458 "config": [ 00:13:15.458 { 00:13:15.458 "params": { 00:13:15.458 "io_mechanism": "libaio", 00:13:15.458 "conserve_cpu": true, 00:13:15.458 "filename": "/dev/nvme0n1", 00:13:15.458 "name": "xnvme_bdev" 00:13:15.458 }, 00:13:15.458 "method": "bdev_xnvme_create" 00:13:15.458 }, 00:13:15.458 { 00:13:15.458 "method": "bdev_wait_for_examine" 00:13:15.458 } 00:13:15.458 ] 00:13:15.458 } 00:13:15.458 ] 00:13:15.458 } 00:13:15.458 [2024-11-20 16:40:00.191406] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:13:15.458 [2024-11-20 16:40:00.191533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69508 ] 00:13:15.717 [2024-11-20 16:40:00.352406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.717 [2024-11-20 16:40:00.456068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.976 Running I/O for 5 seconds... 00:13:17.860 29304.00 IOPS, 114.47 MiB/s [2024-11-20T16:40:04.131Z] 29928.50 IOPS, 116.91 MiB/s [2024-11-20T16:40:05.073Z] 29868.67 IOPS, 116.67 MiB/s [2024-11-20T16:40:06.008Z] 29731.75 IOPS, 116.14 MiB/s [2024-11-20T16:40:06.008Z] 29361.60 IOPS, 114.69 MiB/s 00:13:21.122 Latency(us) 00:13:21.122 [2024-11-20T16:40:06.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.122 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:21.122 xnvme_bdev : 5.01 29344.97 114.63 0.00 0.00 2176.04 450.56 9074.22 00:13:21.122 [2024-11-20T16:40:06.008Z] =================================================================================================================== 00:13:21.122 [2024-11-20T16:40:06.008Z] Total : 29344.97 114.63 0.00 0.00 2176.04 450.56 9074.22 00:13:21.690 16:40:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:21.690 16:40:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:21.690 16:40:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:21.690 16:40:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:21.690 16:40:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:21.690 { 00:13:21.690 "subsystems": [ 00:13:21.690 { 00:13:21.690 "subsystem": "bdev", 00:13:21.690 "config": [ 00:13:21.690 { 00:13:21.690 "params": { 00:13:21.690 "io_mechanism": "libaio", 00:13:21.690 "conserve_cpu": true, 00:13:21.690 "filename": "/dev/nvme0n1", 00:13:21.690 "name": "xnvme_bdev" 00:13:21.690 }, 00:13:21.690 "method": "bdev_xnvme_create" 00:13:21.690 }, 00:13:21.690 { 00:13:21.690 "method": "bdev_wait_for_examine" 00:13:21.690 } 00:13:21.690 ] 00:13:21.690 } 00:13:21.690 ] 00:13:21.690 } 00:13:21.690 [2024-11-20 16:40:06.533563] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
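(For reference, the bdevperf passes in this section all follow the same shape: the JSON fragment shown in the trace is fed to SPDK's bdevperf example over a file descriptor, once per I/O pattern, first randread and then randwrite. Below is a minimal standalone sketch under the assumptions of this run -- same device, repo path, and queue/IO-size parameters -- with the config written to a file instead of /dev/fd/62.)

# Sketch only -- reproduces the bdevperf step recorded in this log outside the test harness.
# /dev/nvme0n1, the repo path, and all parameters are the values used by this run.
cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

(Swapping -w randread for -w randwrite gives the write pass; across the surrounding loop only io_mechanism and conserve_cpu change between the libaio and io_uring iterations.)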
00:13:21.690 [2024-11-20 16:40:06.533694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69583 ] 00:13:21.951 [2024-11-20 16:40:06.692710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.951 [2024-11-20 16:40:06.802629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.210 Running I/O for 5 seconds... 00:13:24.533 30002.00 IOPS, 117.20 MiB/s [2024-11-20T16:40:10.360Z] 29559.50 IOPS, 115.47 MiB/s [2024-11-20T16:40:11.301Z] 28857.67 IOPS, 112.73 MiB/s [2024-11-20T16:40:12.243Z] 24641.25 IOPS, 96.25 MiB/s [2024-11-20T16:40:12.529Z] 20503.00 IOPS, 80.09 MiB/s 00:13:27.643 Latency(us) 00:13:27.643 [2024-11-20T16:40:12.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.643 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:27.643 xnvme_bdev : 5.34 19200.51 75.00 0.00 0.00 3126.80 55.93 341997.10 00:13:27.643 [2024-11-20T16:40:12.529Z] =================================================================================================================== 00:13:27.643 [2024-11-20T16:40:12.529Z] Total : 19200.51 75.00 0.00 0.00 3126.80 55.93 341997.10 00:13:28.590 00:13:28.590 real 0m13.033s 00:13:28.590 user 0m6.522s 00:13:28.590 sys 0m5.288s 00:13:28.590 16:40:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.590 ************************************ 00:13:28.590 END TEST xnvme_bdevperf 00:13:28.590 ************************************ 00:13:28.590 16:40:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:28.590 16:40:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:28.590 16:40:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.590 16:40:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.590 16:40:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.590 ************************************ 00:13:28.590 START TEST xnvme_fio_plugin 00:13:28.590 ************************************ 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:28.590 16:40:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.591 { 00:13:28.591 "subsystems": [ 00:13:28.591 { 00:13:28.591 "subsystem": "bdev", 00:13:28.591 "config": [ 00:13:28.591 { 00:13:28.591 "params": { 00:13:28.591 "io_mechanism": "libaio", 00:13:28.591 "conserve_cpu": true, 00:13:28.591 "filename": "/dev/nvme0n1", 00:13:28.591 "name": "xnvme_bdev" 00:13:28.591 }, 00:13:28.591 "method": "bdev_xnvme_create" 00:13:28.591 }, 00:13:28.591 { 00:13:28.591 "method": "bdev_wait_for_examine" 00:13:28.591 } 00:13:28.591 ] 00:13:28.591 } 00:13:28.591 ] 00:13:28.591 } 00:13:28.591 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:28.591 fio-3.35 00:13:28.591 Starting 1 thread 00:13:35.250 00:13:35.251 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69709: Wed Nov 20 16:40:19 2024 00:13:35.251 read: IOPS=36.8k, BW=144MiB/s (151MB/s)(720MiB/5002msec) 00:13:35.251 slat (usec): min=4, max=1926, avg=20.04, stdev=81.54 00:13:35.251 clat (usec): min=68, max=12164, avg=1202.34, stdev=562.99 00:13:35.251 lat (usec): min=133, max=12169, avg=1222.38, stdev=558.42 00:13:35.251 clat percentiles (usec): 00:13:35.251 | 1.00th=[ 235], 5.00th=[ 404], 10.00th=[ 562], 20.00th=[ 742], 00:13:35.251 | 30.00th=[ 889], 40.00th=[ 1020], 50.00th=[ 1139], 60.00th=[ 1270], 00:13:35.251 | 70.00th=[ 1418], 80.00th=[ 1598], 90.00th=[ 1893], 95.00th=[ 2212], 00:13:35.251 | 99.00th=[ 2933], 99.50th=[ 3228], 99.90th=[ 4015], 99.95th=[ 4686], 00:13:35.251 | 99.99th=[ 7570] 00:13:35.251 bw ( KiB/s): 
min=145864, max=160280, per=100.00%, avg=149381.56, stdev=4817.67, samples=9 00:13:35.251 iops : min=36466, max=40070, avg=37345.33, stdev=1204.45, samples=9 00:13:35.251 lat (usec) : 100=0.01%, 250=1.27%, 500=6.55%, 750=12.63%, 1000=17.92% 00:13:35.251 lat (msec) : 2=53.83%, 4=7.68%, 10=0.10%, 20=0.01% 00:13:35.251 cpu : usr=39.51%, sys=51.47%, ctx=13, majf=0, minf=764 00:13:35.251 IO depths : 1=0.4%, 2=1.0%, 4=2.9%, 8=8.3%, 16=23.3%, 32=62.0%, >=64=2.1% 00:13:35.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.251 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:35.251 issued rwts: total=184276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.251 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:35.251 00:13:35.251 Run status group 0 (all jobs): 00:13:35.251 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=720MiB (755MB), run=5002-5002msec 00:13:35.251 ----------------------------------------------------- 00:13:35.251 Suppressions used: 00:13:35.251 count bytes template 00:13:35.251 1 11 /usr/src/fio/parse.c 00:13:35.251 1 8 libtcmalloc_minimal.so 00:13:35.251 1 904 libcrypto.so 00:13:35.251 ----------------------------------------------------- 00:13:35.251 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:35.251 { 00:13:35.251 "subsystems": [ 00:13:35.251 { 00:13:35.251 "subsystem": "bdev", 00:13:35.251 
"config": [ 00:13:35.251 { 00:13:35.251 "params": { 00:13:35.251 "io_mechanism": "libaio", 00:13:35.251 "conserve_cpu": true, 00:13:35.251 "filename": "/dev/nvme0n1", 00:13:35.251 "name": "xnvme_bdev" 00:13:35.251 }, 00:13:35.251 "method": "bdev_xnvme_create" 00:13:35.251 }, 00:13:35.251 { 00:13:35.251 "method": "bdev_wait_for_examine" 00:13:35.251 } 00:13:35.251 ] 00:13:35.251 } 00:13:35.251 ] 00:13:35.251 } 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:35.251 16:40:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:35.531 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:35.531 fio-3.35 00:13:35.531 Starting 1 thread 00:13:42.257 00:13:42.257 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69795: Wed Nov 20 16:40:25 2024 00:13:42.257 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(664MiB/5001msec); 0 zone resets 00:13:42.257 slat (usec): min=4, max=2026, avg=21.09, stdev=79.91 00:13:42.257 clat (usec): min=11, max=14380, avg=1310.53, stdev=1048.95 00:13:42.257 lat (usec): min=46, max=14384, avg=1331.62, stdev=1046.32 00:13:42.257 clat percentiles (usec): 00:13:42.257 | 1.00th=[ 231], 5.00th=[ 371], 10.00th=[ 515], 20.00th=[ 717], 00:13:42.257 | 30.00th=[ 873], 40.00th=[ 1012], 50.00th=[ 1156], 60.00th=[ 1303], 00:13:42.257 | 70.00th=[ 1467], 80.00th=[ 1680], 90.00th=[ 2024], 95.00th=[ 2442], 00:13:42.257 | 99.00th=[ 6915], 99.50th=[ 9110], 99.90th=[11469], 99.95th=[12256], 00:13:42.257 | 99.99th=[13173] 00:13:42.257 bw ( KiB/s): min=124984, max=159376, per=100.00%, avg=140119.33, stdev=10558.69, samples=9 00:13:42.257 iops : min=31246, max=39844, avg=35029.78, stdev=2639.69, samples=9 00:13:42.257 lat (usec) : 20=0.01%, 50=0.01%, 100=0.05%, 250=1.35%, 500=8.01% 00:13:42.257 lat (usec) : 750=12.70%, 1000=16.95% 00:13:42.257 lat (msec) : 2=50.56%, 4=8.90%, 10=1.16%, 20=0.31% 00:13:42.257 cpu : usr=41.24%, sys=48.54%, ctx=11, majf=0, minf=764 00:13:42.257 IO depths : 1=0.4%, 2=1.1%, 4=3.3%, 8=9.3%, 16=23.2%, 32=60.6%, >=64=2.3% 00:13:42.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.257 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:42.257 issued rwts: total=0,169974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.257 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:42.257 00:13:42.257 Run status group 0 (all jobs): 00:13:42.257 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=664MiB (696MB), run=5001-5001msec 00:13:42.257 ----------------------------------------------------- 00:13:42.257 Suppressions used: 00:13:42.257 count bytes template 00:13:42.257 1 11 /usr/src/fio/parse.c 00:13:42.257 1 8 libtcmalloc_minimal.so 00:13:42.257 1 904 libcrypto.so 00:13:42.257 ----------------------------------------------------- 00:13:42.257 00:13:42.257 
00:13:42.257 real 0m13.667s 00:13:42.257 user 0m6.730s 00:13:42.257 sys 0m5.562s 00:13:42.257 16:40:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.257 16:40:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:42.257 ************************************ 00:13:42.257 END TEST xnvme_fio_plugin 00:13:42.257 ************************************ 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:42.257 16:40:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:42.257 16:40:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:42.257 16:40:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.257 16:40:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.257 ************************************ 00:13:42.257 START TEST xnvme_rpc 00:13:42.257 ************************************ 00:13:42.257 16:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:42.257 16:40:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:42.257 16:40:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:42.257 16:40:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:42.257 16:40:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:42.257 16:40:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69881 00:13:42.257 16:40:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69881 00:13:42.258 16:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69881 ']' 00:13:42.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.258 16:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.258 16:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.258 16:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.258 16:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.258 16:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.258 16:40:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:42.258 [2024-11-20 16:40:27.028696] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
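(For reference, each xnvme_fio_plugin pass in this log -- the libaio pair summarized just above and the io_uring pair later on -- runs stock fio against the same JSON bdev config by preloading SPDK's fio bdev plugin; libasan.so.8 appears in LD_PRELOAD only because this build is ASan-instrumented. A minimal standalone sketch of the read pass, assuming this run's paths and a config file in place of /dev/fd/62:)

# Sketch only -- standalone form of the fio invocation traced in this log.
# Paths and parameters are the values used by this run; /tmp/xnvme_bdev.json is the
# same bdev config shown in the trace (bdev_xnvme_create + bdev_wait_for_examine).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

(The --rw=randwrite variant gives the write pass recorded in the same test.)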
00:13:42.258 [2024-11-20 16:40:27.028821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69881 ] 00:13:42.519 [2024-11-20 16:40:27.181351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.519 [2024-11-20 16:40:27.284593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.089 xnvme_bdev 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:43.089 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:43.349 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:43.349 16:40:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.349 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.349 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.349 16:40:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69881 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69881 ']' 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69881 00:13:43.349 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:43.350 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.350 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69881 00:13:43.350 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.350 killing process with pid 69881 00:13:43.350 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.350 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69881' 00:13:43.350 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69881 00:13:43.350 16:40:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69881 00:13:44.734 00:13:44.734 real 0m2.654s 00:13:44.734 user 0m2.758s 00:13:44.734 sys 0m0.380s 00:13:44.734 16:40:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.734 ************************************ 00:13:44.734 END TEST xnvme_rpc 00:13:44.734 ************************************ 00:13:44.734 16:40:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.996 16:40:29 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:44.996 16:40:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.996 16:40:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.996 16:40:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.996 ************************************ 00:13:44.996 START TEST xnvme_bdevperf 00:13:44.996 ************************************ 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:44.996 16:40:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.996 { 00:13:44.996 "subsystems": [ 00:13:44.996 { 00:13:44.996 "subsystem": "bdev", 00:13:44.996 "config": [ 00:13:44.996 { 00:13:44.996 "params": { 00:13:44.996 "io_mechanism": "io_uring", 00:13:44.996 "conserve_cpu": false, 00:13:44.996 "filename": "/dev/nvme0n1", 00:13:44.996 "name": "xnvme_bdev" 00:13:44.996 }, 00:13:44.996 "method": "bdev_xnvme_create" 00:13:44.996 }, 00:13:44.996 { 00:13:44.996 "method": "bdev_wait_for_examine" 00:13:44.996 } 00:13:44.996 ] 00:13:44.996 } 00:13:44.996 ] 00:13:44.996 } 00:13:44.996 [2024-11-20 16:40:29.727157] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:13:44.996 [2024-11-20 16:40:29.727286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69950 ] 00:13:45.256 [2024-11-20 16:40:29.886762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.256 [2024-11-20 16:40:29.988667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.518 Running I/O for 5 seconds... 00:13:47.402 33848.00 IOPS, 132.22 MiB/s [2024-11-20T16:40:35.577Z] 31043.00 IOPS, 121.26 MiB/s [2024-11-20T16:40:35.577Z] 32725.00 IOPS, 127.83 MiB/s [2024-11-20T16:40:35.577Z] 33435.75 IOPS, 130.61 MiB/s [2024-11-20T16:40:35.577Z] 33932.80 IOPS, 132.55 MiB/s 00:13:50.691 Latency(us) 00:13:50.691 [2024-11-20T16:40:35.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.691 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:50.691 xnvme_bdev : 5.00 33917.44 132.49 0.00 0.00 1882.03 66.17 166158.97 00:13:50.691 [2024-11-20T16:40:35.577Z] =================================================================================================================== 00:13:50.691 [2024-11-20T16:40:35.577Z] Total : 33917.44 132.49 0.00 0.00 1882.03 66.17 166158.97 00:13:51.258 16:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.258 16:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:51.258 16:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:51.258 16:40:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:51.258 16:40:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:51.258 { 00:13:51.258 "subsystems": [ 00:13:51.258 { 00:13:51.258 "subsystem": "bdev", 00:13:51.258 "config": [ 00:13:51.258 { 00:13:51.258 "params": { 00:13:51.258 "io_mechanism": "io_uring", 00:13:51.258 "conserve_cpu": false, 00:13:51.258 "filename": "/dev/nvme0n1", 00:13:51.258 "name": "xnvme_bdev" 00:13:51.258 }, 00:13:51.258 "method": "bdev_xnvme_create" 00:13:51.258 }, 00:13:51.258 { 00:13:51.258 "method": "bdev_wait_for_examine" 00:13:51.258 } 00:13:51.258 ] 00:13:51.258 } 00:13:51.258 ] 00:13:51.258 } 00:13:51.258 [2024-11-20 16:40:36.026510] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:13:51.258 [2024-11-20 16:40:36.026636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70025 ] 00:13:51.517 [2024-11-20 16:40:36.187289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.517 [2024-11-20 16:40:36.290750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.826 Running I/O for 5 seconds... 00:13:53.751 3847.00 IOPS, 15.03 MiB/s [2024-11-20T16:40:39.581Z] 3930.50 IOPS, 15.35 MiB/s [2024-11-20T16:40:40.569Z] 3933.67 IOPS, 15.37 MiB/s [2024-11-20T16:40:41.953Z] 4057.50 IOPS, 15.85 MiB/s [2024-11-20T16:40:41.953Z] 5800.80 IOPS, 22.66 MiB/s 00:13:57.067 Latency(us) 00:13:57.067 [2024-11-20T16:40:41.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.067 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:57.067 xnvme_bdev : 5.03 5777.51 22.57 0.00 0.00 11063.50 55.93 98404.82 00:13:57.067 [2024-11-20T16:40:41.953Z] =================================================================================================================== 00:13:57.067 [2024-11-20T16:40:41.953Z] Total : 5777.51 22.57 0.00 0.00 11063.50 55.93 98404.82 00:13:57.638 00:13:57.638 real 0m12.631s 00:13:57.638 user 0m5.891s 00:13:57.638 sys 0m6.485s 00:13:57.638 16:40:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.638 16:40:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.638 ************************************ 00:13:57.638 END TEST xnvme_bdevperf 00:13:57.638 ************************************ 00:13:57.638 16:40:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:57.638 16:40:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.638 16:40:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.638 16:40:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.638 ************************************ 00:13:57.638 START TEST xnvme_fio_plugin 00:13:57.638 ************************************ 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:57.638 16:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.638 { 00:13:57.638 "subsystems": [ 00:13:57.638 { 00:13:57.638 "subsystem": "bdev", 00:13:57.638 "config": [ 00:13:57.638 { 00:13:57.638 "params": { 00:13:57.638 "io_mechanism": "io_uring", 00:13:57.638 "conserve_cpu": false, 00:13:57.638 "filename": "/dev/nvme0n1", 00:13:57.638 "name": "xnvme_bdev" 00:13:57.638 }, 00:13:57.638 "method": "bdev_xnvme_create" 00:13:57.638 }, 00:13:57.638 { 00:13:57.638 "method": "bdev_wait_for_examine" 00:13:57.638 } 00:13:57.638 ] 00:13:57.638 } 00:13:57.638 ] 00:13:57.638 } 00:13:57.899 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:57.899 fio-3.35 00:13:57.899 Starting 1 thread 00:14:04.663 00:14:04.663 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70139: Wed Nov 20 16:40:48 2024 00:14:04.663 read: IOPS=38.5k, BW=150MiB/s (158MB/s)(752MiB/5004msec) 00:14:04.663 slat (nsec): min=2724, max=79813, avg=3976.60, stdev=2001.69 00:14:04.663 clat (usec): min=138, max=177859, avg=1501.33, stdev=817.50 00:14:04.663 lat (usec): min=142, max=177862, avg=1505.31, stdev=817.59 00:14:04.663 clat percentiles (usec): 00:14:04.663 | 1.00th=[ 881], 5.00th=[ 1020], 10.00th=[ 1090], 20.00th=[ 1188], 00:14:04.663 | 30.00th=[ 1287], 40.00th=[ 1369], 50.00th=[ 1467], 60.00th=[ 1549], 00:14:04.663 | 70.00th=[ 1647], 80.00th=[ 1778], 90.00th=[ 1942], 95.00th=[ 2073], 00:14:04.663 | 99.00th=[ 2442], 99.50th=[ 2606], 99.90th=[ 3720], 99.95th=[ 5211], 00:14:04.663 | 99.99th=[11469] 00:14:04.663 bw ( KiB/s): 
min=140280, max=174754, per=100.00%, avg=154516.67, stdev=11367.41, samples=9 00:14:04.663 iops : min=35070, max=43688, avg=38629.11, stdev=2841.74, samples=9 00:14:04.663 lat (usec) : 250=0.01%, 500=0.02%, 750=0.10%, 1000=3.95% 00:14:04.663 lat (msec) : 2=88.34%, 4=7.50%, 10=0.08%, 20=0.01%, 50=0.01% 00:14:04.663 lat (msec) : 100=0.01%, 250=0.01% 00:14:04.663 cpu : usr=32.78%, sys=66.06%, ctx=21, majf=0, minf=762 00:14:04.663 IO depths : 1=1.4%, 2=2.9%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.5%, >=64=1.6% 00:14:04.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.663 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:04.663 issued rwts: total=192478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:04.664 00:14:04.664 Run status group 0 (all jobs): 00:14:04.664 READ: bw=150MiB/s (158MB/s), 150MiB/s-150MiB/s (158MB/s-158MB/s), io=752MiB (788MB), run=5004-5004msec 00:14:04.664 ----------------------------------------------------- 00:14:04.664 Suppressions used: 00:14:04.664 count bytes template 00:14:04.664 1 11 /usr/src/fio/parse.c 00:14:04.664 1 8 libtcmalloc_minimal.so 00:14:04.664 1 904 libcrypto.so 00:14:04.664 ----------------------------------------------------- 00:14:04.664 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:04.664 16:40:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:04.664 { 00:14:04.664 "subsystems": [ 00:14:04.664 { 00:14:04.664 "subsystem": "bdev", 00:14:04.664 "config": [ 00:14:04.664 { 00:14:04.664 "params": { 00:14:04.664 "io_mechanism": "io_uring", 00:14:04.664 "conserve_cpu": false, 00:14:04.664 "filename": "/dev/nvme0n1", 00:14:04.664 "name": "xnvme_bdev" 00:14:04.664 }, 00:14:04.664 "method": "bdev_xnvme_create" 00:14:04.664 }, 00:14:04.664 { 00:14:04.664 "method": "bdev_wait_for_examine" 00:14:04.664 } 00:14:04.664 ] 00:14:04.664 } 00:14:04.664 ] 00:14:04.664 } 00:14:04.664 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:04.664 fio-3.35 00:14:04.664 Starting 1 thread 00:14:11.297 00:14:11.297 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70231: Wed Nov 20 16:40:54 2024 00:14:11.297 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec); 0 zone resets 00:14:11.297 slat (nsec): min=2784, max=76074, avg=4677.50, stdev=2656.26 00:14:11.297 clat (usec): min=68, max=85708, avg=1742.76, stdev=2582.22 00:14:11.297 lat (usec): min=71, max=85712, avg=1747.44, stdev=2582.26 00:14:11.297 clat percentiles (usec): 00:14:11.297 | 1.00th=[ 922], 5.00th=[ 1090], 10.00th=[ 1172], 20.00th=[ 1303], 00:14:11.297 | 30.00th=[ 1401], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663], 00:14:11.297 | 70.00th=[ 1745], 80.00th=[ 1860], 90.00th=[ 2024], 95.00th=[ 2212], 00:14:11.297 | 99.00th=[ 2704], 99.50th=[10683], 99.90th=[58459], 99.95th=[67634], 00:14:11.297 | 99.99th=[84411] 00:14:11.297 bw ( KiB/s): min=105712, max=153736, per=100.00%, avg=133978.11, stdev=15409.04, samples=9 00:14:11.297 iops : min=26428, max=38434, avg=33494.44, stdev=3852.29, samples=9 00:14:11.297 lat (usec) : 100=0.01%, 250=0.03%, 500=0.13%, 750=0.18%, 1000=1.84% 00:14:11.297 lat (msec) : 2=86.45%, 4=10.68%, 10=0.17%, 20=0.28%, 50=0.13% 00:14:11.297 lat (msec) : 100=0.10% 00:14:11.297 cpu : usr=35.30%, sys=63.60%, ctx=11, majf=0, minf=762 00:14:11.297 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.7%, 32=50.8%, >=64=1.7% 00:14:11.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.297 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:11.297 issued rwts: total=0,166233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.297 00:14:11.297 Run status group 0 (all jobs): 00:14:11.297 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (681MB), run=5001-5001msec 00:14:11.297 ----------------------------------------------------- 00:14:11.297 Suppressions used: 00:14:11.297 count bytes template 00:14:11.297 1 11 /usr/src/fio/parse.c 00:14:11.297 1 8 libtcmalloc_minimal.so 00:14:11.297 1 904 libcrypto.so 00:14:11.297 
----------------------------------------------------- 00:14:11.297 00:14:11.297 ************************************ 00:14:11.297 END TEST xnvme_fio_plugin 00:14:11.297 ************************************ 00:14:11.297 00:14:11.297 real 0m13.560s 00:14:11.297 user 0m6.138s 00:14:11.297 sys 0m6.977s 00:14:11.297 16:40:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.297 16:40:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:11.297 16:40:55 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:11.297 16:40:55 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:11.297 16:40:55 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:11.297 16:40:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:11.297 16:40:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.297 16:40:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.297 16:40:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.297 ************************************ 00:14:11.297 START TEST xnvme_rpc 00:14:11.297 ************************************ 00:14:11.297 16:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:11.297 16:40:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:11.297 16:40:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:11.297 16:40:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:11.297 16:40:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:11.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.297 16:40:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70317 00:14:11.297 16:40:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70317 00:14:11.298 16:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70317 ']' 00:14:11.298 16:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.298 16:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.298 16:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.298 16:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.298 16:40:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.298 16:40:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.298 [2024-11-20 16:40:56.059705] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
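(Stripped of the harness tracing, the xnvme_rpc steps that follow amount to the short RPC sequence sketched below; rpc_cmd is the test harness's helper that talks to the target over /var/tmp/spdk.sock, and every value is the one used by this run.)

# Sketch only -- the RPC flow exercised below, using this run's values.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &     # start the target; RPC listens on /var/tmp/spdk.sock
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c     # create the bdev; -c selects conserve_cpu=true
rpc_cmd framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'     # verify the registered name; the test repeats this for filename, io_mechanism and conserve_cpu
rpc_cmd bdev_xnvme_delete xnvme_bdev     # remove the bdev (the harness then kills the target)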
00:14:11.298 [2024-11-20 16:40:56.059831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70317 ] 00:14:11.558 [2024-11-20 16:40:56.224107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.558 [2024-11-20 16:40:56.367834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.214 xnvme_bdev 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.214 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70317 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70317 ']' 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70317 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70317 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.476 killing process with pid 70317 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70317' 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70317 00:14:12.476 16:40:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70317 00:14:14.388 00:14:14.388 real 0m2.769s 00:14:14.389 user 0m2.878s 00:14:14.389 sys 0m0.378s 00:14:14.389 16:40:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.389 ************************************ 00:14:14.389 END TEST xnvme_rpc 00:14:14.389 ************************************ 00:14:14.389 16:40:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.389 16:40:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:14.389 16:40:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.389 16:40:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.389 16:40:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.389 ************************************ 00:14:14.389 START TEST xnvme_bdevperf 00:14:14.389 ************************************ 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:14.389 16:40:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.389 { 00:14:14.389 "subsystems": [ 00:14:14.389 { 00:14:14.389 "subsystem": "bdev", 00:14:14.389 "config": [ 00:14:14.389 { 00:14:14.389 "params": { 00:14:14.389 "io_mechanism": "io_uring", 00:14:14.389 "conserve_cpu": true, 00:14:14.389 "filename": "/dev/nvme0n1", 00:14:14.389 "name": "xnvme_bdev" 00:14:14.389 }, 00:14:14.389 "method": "bdev_xnvme_create" 00:14:14.389 }, 00:14:14.389 { 00:14:14.389 "method": "bdev_wait_for_examine" 00:14:14.389 } 00:14:14.389 ] 00:14:14.389 } 00:14:14.389 ] 00:14:14.389 } 00:14:14.389 [2024-11-20 16:40:58.870056] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:14:14.389 [2024-11-20 16:40:58.870316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70380 ] 00:14:14.389 [2024-11-20 16:40:59.031824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.389 [2024-11-20 16:40:59.136675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.649 Running I/O for 5 seconds... 00:14:16.533 36724.00 IOPS, 143.45 MiB/s [2024-11-20T16:41:02.409Z] 36116.00 IOPS, 141.08 MiB/s [2024-11-20T16:41:03.796Z] 35935.67 IOPS, 140.37 MiB/s [2024-11-20T16:41:04.735Z] 35429.25 IOPS, 138.40 MiB/s [2024-11-20T16:41:04.736Z] 35622.60 IOPS, 139.15 MiB/s 00:14:19.850 Latency(us) 00:14:19.850 [2024-11-20T16:41:04.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.850 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:19.850 xnvme_bdev : 5.00 35593.17 139.04 0.00 0.00 1793.02 75.22 110503.78 00:14:19.850 [2024-11-20T16:41:04.736Z] =================================================================================================================== 00:14:19.850 [2024-11-20T16:41:04.736Z] Total : 35593.17 139.04 0.00 0.00 1793.02 75.22 110503.78 00:14:20.419 16:41:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.419 16:41:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:20.419 16:41:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:20.419 16:41:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:20.419 16:41:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.419 { 00:14:20.419 "subsystems": [ 00:14:20.419 { 00:14:20.419 "subsystem": "bdev", 00:14:20.419 "config": [ 00:14:20.419 { 00:14:20.419 "params": { 00:14:20.419 "io_mechanism": "io_uring", 00:14:20.419 "conserve_cpu": true, 00:14:20.419 "filename": "/dev/nvme0n1", 00:14:20.419 "name": "xnvme_bdev" 00:14:20.419 }, 00:14:20.419 "method": "bdev_xnvme_create" 00:14:20.419 }, 00:14:20.419 { 00:14:20.419 "method": "bdev_wait_for_examine" 00:14:20.419 } 00:14:20.419 ] 00:14:20.419 } 00:14:20.419 ] 00:14:20.419 } 00:14:20.419 [2024-11-20 16:41:05.179031] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
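For reference, each bdevperf pass in this test feeds the JSON bdev configuration printed in the trace to the bdevperf example app over /dev/fd/62. A minimal sketch of the same invocation, assuming the config is instead written to an ordinary file and using the repo path reported in this log (only the -w workload changes between the randread and randwrite passes):

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same command line as in the trace above, with a file path in place of /dev/fd/62.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096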
00:14:20.419 [2024-11-20 16:41:05.179330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70461 ] 00:14:20.680 [2024-11-20 16:41:05.340568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.680 [2024-11-20 16:41:05.443496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.941 Running I/O for 5 seconds... 00:14:22.823 6010.00 IOPS, 23.48 MiB/s [2024-11-20T16:41:09.100Z] 7541.00 IOPS, 29.46 MiB/s [2024-11-20T16:41:10.046Z] 6150.00 IOPS, 24.02 MiB/s [2024-11-20T16:41:10.993Z] 6008.50 IOPS, 23.47 MiB/s [2024-11-20T16:41:10.993Z] 7727.40 IOPS, 30.19 MiB/s 00:14:26.107 Latency(us) 00:14:26.107 [2024-11-20T16:41:10.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.107 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:26.107 xnvme_bdev : 5.02 7714.53 30.13 0.00 0.00 8279.25 53.17 364581.81 00:14:26.107 [2024-11-20T16:41:10.993Z] =================================================================================================================== 00:14:26.107 [2024-11-20T16:41:10.993Z] Total : 7714.53 30.13 0.00 0.00 8279.25 53.17 364581.81 00:14:26.677 00:14:26.677 real 0m12.632s 00:14:26.677 user 0m9.564s 00:14:26.677 sys 0m2.236s 00:14:26.677 16:41:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.677 ************************************ 00:14:26.677 END TEST xnvme_bdevperf 00:14:26.677 ************************************ 00:14:26.677 16:41:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.677 16:41:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:26.677 16:41:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.677 16:41:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.677 16:41:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.677 ************************************ 00:14:26.677 START TEST xnvme_fio_plugin 00:14:26.677 ************************************ 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:26.677 16:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.677 { 00:14:26.677 "subsystems": [ 00:14:26.677 { 00:14:26.677 "subsystem": "bdev", 00:14:26.677 "config": [ 00:14:26.677 { 00:14:26.677 "params": { 00:14:26.677 "io_mechanism": "io_uring", 00:14:26.677 "conserve_cpu": true, 00:14:26.677 "filename": "/dev/nvme0n1", 00:14:26.677 "name": "xnvme_bdev" 00:14:26.677 }, 00:14:26.677 "method": "bdev_xnvme_create" 00:14:26.677 }, 00:14:26.677 { 00:14:26.677 "method": "bdev_wait_for_examine" 00:14:26.677 } 00:14:26.677 ] 00:14:26.677 } 00:14:26.677 ] 00:14:26.677 } 00:14:26.938 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:26.938 fio-3.35 00:14:26.938 Starting 1 thread 00:14:33.528 00:14:33.528 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70580: Wed Nov 20 16:41:17 2024 00:14:33.528 read: IOPS=38.5k, BW=150MiB/s (158MB/s)(752MiB/5001msec) 00:14:33.528 slat (nsec): min=2722, max=69786, avg=4352.98, stdev=2470.85 00:14:33.528 clat (usec): min=616, max=3689, avg=1485.77, stdev=340.66 00:14:33.528 lat (usec): min=619, max=3694, avg=1490.12, stdev=341.45 00:14:33.528 clat percentiles (usec): 00:14:33.528 | 1.00th=[ 930], 5.00th=[ 1020], 10.00th=[ 1090], 20.00th=[ 1188], 00:14:33.528 | 30.00th=[ 1270], 40.00th=[ 1352], 50.00th=[ 1450], 60.00th=[ 1532], 00:14:33.528 | 70.00th=[ 1631], 80.00th=[ 1762], 90.00th=[ 1926], 95.00th=[ 2089], 00:14:33.528 | 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3163], 99.95th=[ 3425], 00:14:33.528 | 99.99th=[ 3621] 00:14:33.528 bw ( KiB/s): min=140288, 
max=167936, per=99.79%, avg=153620.22, stdev=9528.72, samples=9 00:14:33.528 iops : min=35072, max=41984, avg=38405.00, stdev=2382.13, samples=9 00:14:33.528 lat (usec) : 750=0.01%, 1000=3.68% 00:14:33.528 lat (msec) : 2=88.94%, 4=7.38% 00:14:33.528 cpu : usr=54.30%, sys=41.80%, ctx=17, majf=0, minf=762 00:14:33.528 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:33.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.528 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:33.528 issued rwts: total=192468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:33.528 00:14:33.528 Run status group 0 (all jobs): 00:14:33.528 READ: bw=150MiB/s (158MB/s), 150MiB/s-150MiB/s (158MB/s-158MB/s), io=752MiB (788MB), run=5001-5001msec 00:14:33.528 ----------------------------------------------------- 00:14:33.528 Suppressions used: 00:14:33.528 count bytes template 00:14:33.528 1 11 /usr/src/fio/parse.c 00:14:33.528 1 8 libtcmalloc_minimal.so 00:14:33.528 1 904 libcrypto.so 00:14:33.528 ----------------------------------------------------- 00:14:33.528 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.528 16:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.528 { 00:14:33.528 "subsystems": [ 00:14:33.528 { 00:14:33.528 "subsystem": "bdev", 00:14:33.528 "config": [ 00:14:33.528 { 00:14:33.528 "params": { 00:14:33.528 "io_mechanism": "io_uring", 00:14:33.528 "conserve_cpu": true, 00:14:33.528 "filename": "/dev/nvme0n1", 00:14:33.528 "name": "xnvme_bdev" 00:14:33.528 }, 00:14:33.528 "method": "bdev_xnvme_create" 00:14:33.528 }, 00:14:33.528 { 00:14:33.528 "method": "bdev_wait_for_examine" 00:14:33.528 } 00:14:33.528 ] 00:14:33.528 } 00:14:33.528 ] 00:14:33.528 } 00:14:33.788 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:33.788 fio-3.35 00:14:33.788 Starting 1 thread 00:14:40.375 00:14:40.375 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70666: Wed Nov 20 16:41:24 2024 00:14:40.375 write: IOPS=36.3k, BW=142MiB/s (149MB/s)(710MiB/5003msec); 0 zone resets 00:14:40.375 slat (usec): min=2, max=523, avg= 4.02, stdev= 3.05 00:14:40.375 clat (usec): min=48, max=113470, avg=1610.29, stdev=2978.63 00:14:40.375 lat (usec): min=51, max=113473, avg=1614.31, stdev=2978.72 00:14:40.375 clat percentiles (usec): 00:14:40.375 | 1.00th=[ 668], 5.00th=[ 906], 10.00th=[ 996], 20.00th=[ 1123], 00:14:40.375 | 30.00th=[ 1205], 40.00th=[ 1287], 50.00th=[ 1352], 60.00th=[ 1418], 00:14:40.375 | 70.00th=[ 1500], 80.00th=[ 1631], 90.00th=[ 1876], 95.00th=[ 2180], 00:14:40.375 | 99.00th=[ 7832], 99.50th=[ 8717], 99.90th=[ 11469], 99.95th=[101188], 00:14:40.375 | 99.99th=[112722] 00:14:40.375 bw ( KiB/s): min=85536, max=182808, per=100.00%, avg=155430.56, stdev=29531.86, samples=9 00:14:40.375 iops : min=21384, max=45702, avg=38857.56, stdev=7383.00, samples=9 00:14:40.375 lat (usec) : 50=0.01%, 100=0.01%, 250=0.13%, 500=0.40%, 750=1.01% 00:14:40.375 lat (usec) : 1000=8.54% 00:14:40.375 lat (msec) : 2=82.87%, 4=4.01%, 10=2.87%, 20=0.08%, 50=0.01% 00:14:40.375 lat (msec) : 100=0.01%, 250=0.06% 00:14:40.375 cpu : usr=70.25%, sys=25.61%, ctx=11, majf=0, minf=762 00:14:40.375 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.4%, 32=52.8%, >=64=2.1% 00:14:40.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.375 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.2%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:40.375 issued rwts: total=0,181808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.375 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:40.375 00:14:40.375 Run status group 0 (all jobs): 00:14:40.375 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=710MiB (745MB), run=5003-5003msec 00:14:40.375 ----------------------------------------------------- 00:14:40.375 Suppressions used: 00:14:40.375 count bytes template 00:14:40.375 1 11 /usr/src/fio/parse.c 00:14:40.375 1 8 libtcmalloc_minimal.so 00:14:40.375 1 904 libcrypto.so 00:14:40.375 ----------------------------------------------------- 00:14:40.375 00:14:40.375 00:14:40.375 
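Both fio passes above go through the SPDK bdev ioengine rather than a raw block device. A minimal sketch of the command line the fio_bdev wrapper assembles, assuming the JSON bdev config shown above has been written to a file (the hypothetical /tmp/xnvme_bdev.json from the earlier sketch; the harness itself substitutes /dev/fd/62), with the fio path, plugin path and ASan preload as reported in the trace:

export LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
# --rw=randread for the first pass, --rw=randwrite for the second.
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev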
real 0m13.617s 00:14:40.375 user 0m9.010s 00:14:40.375 sys 0m3.878s 00:14:40.375 16:41:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.375 ************************************ 00:14:40.375 END TEST xnvme_fio_plugin 00:14:40.375 ************************************ 00:14:40.375 16:41:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:40.375 16:41:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:40.375 16:41:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:40.375 16:41:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.375 16:41:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.375 ************************************ 00:14:40.375 START TEST xnvme_rpc 00:14:40.375 ************************************ 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70747 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70747 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70747 ']' 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.375 16:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.636 [2024-11-20 16:41:25.269863] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
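The xnvme_rpc test starting here drives a short RPC sequence against this spdk_tgt: create an xnvme bdev on the NVMe generic char device, read its parameters back out of framework_get_config, then delete it. A minimal sketch of the same sequence issued by hand, assuming the standard scripts/rpc.py client from this checkout and the /var/tmp/spdk.sock socket named above (the test itself goes through its rpc_cmd wrapper):

cd /home/vagrant/spdk_repo/spdk
# Create the bdev; append -c to set conserve_cpu, as the later pass in this log does.
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
# Read the registered parameters back; the test checks them with the same jq filter.
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# Tear the bdev down again.
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev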
00:14:40.636 [2024-11-20 16:41:25.270139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70747 ] 00:14:40.636 [2024-11-20 16:41:25.429114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.897 [2024-11-20 16:41:25.535198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 xnvme_bdev 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70747 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70747 ']' 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70747 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:41.469 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.470 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70747 00:14:41.470 killing process with pid 70747 00:14:41.470 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.470 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.470 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70747' 00:14:41.470 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70747 00:14:41.470 16:41:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70747 00:14:43.384 ************************************ 00:14:43.384 END TEST xnvme_rpc 00:14:43.384 ************************************ 00:14:43.384 00:14:43.384 real 0m2.652s 00:14:43.384 user 0m2.740s 00:14:43.384 sys 0m0.370s 00:14:43.384 16:41:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.384 16:41:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.384 16:41:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:43.384 16:41:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:43.384 16:41:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.384 16:41:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.384 ************************************ 00:14:43.384 START TEST xnvme_bdevperf 00:14:43.384 ************************************ 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.384 16:41:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.384 { 00:14:43.384 "subsystems": [ 00:14:43.384 { 00:14:43.384 "subsystem": "bdev", 00:14:43.384 "config": [ 00:14:43.384 { 00:14:43.384 "params": { 00:14:43.384 "io_mechanism": "io_uring_cmd", 00:14:43.384 "conserve_cpu": false, 00:14:43.384 "filename": "/dev/ng0n1", 00:14:43.384 "name": "xnvme_bdev" 00:14:43.384 }, 00:14:43.384 "method": "bdev_xnvme_create" 00:14:43.384 }, 00:14:43.384 { 00:14:43.384 "method": "bdev_wait_for_examine" 00:14:43.384 } 00:14:43.384 ] 00:14:43.384 } 00:14:43.385 ] 00:14:43.385 } 00:14:43.385 [2024-11-20 16:41:27.970652] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:14:43.385 [2024-11-20 16:41:27.970777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70821 ] 00:14:43.385 [2024-11-20 16:41:28.132650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.385 [2024-11-20 16:41:28.238945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.645 Running I/O for 5 seconds... 00:14:45.971 39019.00 IOPS, 152.42 MiB/s [2024-11-20T16:41:31.805Z] 38197.50 IOPS, 149.21 MiB/s [2024-11-20T16:41:32.746Z] 38273.00 IOPS, 149.50 MiB/s [2024-11-20T16:41:33.687Z] 38768.75 IOPS, 151.44 MiB/s [2024-11-20T16:41:33.687Z] 39597.00 IOPS, 154.68 MiB/s 00:14:48.801 Latency(us) 00:14:48.801 [2024-11-20T16:41:33.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.801 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:48.801 xnvme_bdev : 5.01 39555.91 154.52 0.00 0.00 1614.04 155.18 45572.73 00:14:48.801 [2024-11-20T16:41:33.687Z] =================================================================================================================== 00:14:48.801 [2024-11-20T16:41:33.687Z] Total : 39555.91 154.52 0.00 0.00 1614.04 155.18 45572.73 00:14:49.371 16:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.371 16:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:49.371 16:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.371 16:41:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.371 16:41:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.632 { 00:14:49.632 "subsystems": [ 00:14:49.632 { 00:14:49.632 "subsystem": "bdev", 00:14:49.632 "config": [ 00:14:49.632 { 00:14:49.632 "params": { 00:14:49.632 "io_mechanism": "io_uring_cmd", 00:14:49.632 "conserve_cpu": false, 00:14:49.632 "filename": "/dev/ng0n1", 00:14:49.632 "name": "xnvme_bdev" 00:14:49.632 }, 00:14:49.632 "method": "bdev_xnvme_create" 00:14:49.632 }, 00:14:49.632 { 00:14:49.632 "method": "bdev_wait_for_examine" 00:14:49.632 } 00:14:49.632 ] 00:14:49.632 } 00:14:49.632 ] 00:14:49.632 } 00:14:49.633 [2024-11-20 16:41:34.307481] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:14:49.633 [2024-11-20 16:41:34.307604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70895 ] 00:14:49.633 [2024-11-20 16:41:34.468904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.895 [2024-11-20 16:41:34.572992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.155 Running I/O for 5 seconds... 00:14:52.073 18510.00 IOPS, 72.30 MiB/s [2024-11-20T16:41:37.901Z] 16006.50 IOPS, 62.53 MiB/s [2024-11-20T16:41:38.840Z] 15552.00 IOPS, 60.75 MiB/s [2024-11-20T16:41:40.229Z] 16828.50 IOPS, 65.74 MiB/s [2024-11-20T16:41:40.229Z] 16431.20 IOPS, 64.18 MiB/s 00:14:55.343 Latency(us) 00:14:55.343 [2024-11-20T16:41:40.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.343 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:55.343 xnvme_bdev : 5.20 15815.08 61.78 0.00 0.00 4036.51 46.28 253271.43 00:14:55.343 [2024-11-20T16:41:40.229Z] =================================================================================================================== 00:14:55.343 [2024-11-20T16:41:40.229Z] Total : 15815.08 61.78 0.00 0.00 4036.51 46.28 253271.43 00:14:55.912 16:41:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.912 16:41:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:55.912 16:41:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:55.912 16:41:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:55.912 16:41:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.912 { 00:14:55.912 "subsystems": [ 00:14:55.912 { 00:14:55.912 "subsystem": "bdev", 00:14:55.912 "config": [ 00:14:55.912 { 00:14:55.912 "params": { 00:14:55.912 "io_mechanism": "io_uring_cmd", 00:14:55.912 "conserve_cpu": false, 00:14:55.912 "filename": "/dev/ng0n1", 00:14:55.912 "name": "xnvme_bdev" 00:14:55.913 }, 00:14:55.913 "method": "bdev_xnvme_create" 00:14:55.913 }, 00:14:55.913 { 00:14:55.913 "method": "bdev_wait_for_examine" 00:14:55.913 } 00:14:55.913 ] 00:14:55.913 } 00:14:55.913 ] 00:14:55.913 } 00:14:56.174 [2024-11-20 16:41:40.812148] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:14:56.174 [2024-11-20 16:41:40.812286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70968 ] 00:14:56.174 [2024-11-20 16:41:40.975189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.436 [2024-11-20 16:41:41.078513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.696 Running I/O for 5 seconds... 
00:14:58.663 59840.00 IOPS, 233.75 MiB/s [2024-11-20T16:41:44.488Z] 58816.00 IOPS, 229.75 MiB/s [2024-11-20T16:41:45.428Z] 60693.33 IOPS, 237.08 MiB/s [2024-11-20T16:41:46.369Z] 63040.00 IOPS, 246.25 MiB/s [2024-11-20T16:41:46.369Z] 63296.00 IOPS, 247.25 MiB/s 00:15:01.483 Latency(us) 00:15:01.483 [2024-11-20T16:41:46.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.483 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:01.483 xnvme_bdev : 5.00 63280.70 247.19 0.00 0.00 1007.74 482.07 2848.30 00:15:01.483 [2024-11-20T16:41:46.369Z] =================================================================================================================== 00:15:01.483 [2024-11-20T16:41:46.369Z] Total : 63280.70 247.19 0.00 0.00 1007.74 482.07 2848.30 00:15:02.425 16:41:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.425 16:41:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:02.425 16:41:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.425 16:41:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.425 16:41:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.425 { 00:15:02.425 "subsystems": [ 00:15:02.425 { 00:15:02.425 "subsystem": "bdev", 00:15:02.425 "config": [ 00:15:02.425 { 00:15:02.425 "params": { 00:15:02.425 "io_mechanism": "io_uring_cmd", 00:15:02.425 "conserve_cpu": false, 00:15:02.425 "filename": "/dev/ng0n1", 00:15:02.425 "name": "xnvme_bdev" 00:15:02.425 }, 00:15:02.425 "method": "bdev_xnvme_create" 00:15:02.425 }, 00:15:02.425 { 00:15:02.425 "method": "bdev_wait_for_examine" 00:15:02.425 } 00:15:02.425 ] 00:15:02.425 } 00:15:02.425 ] 00:15:02.425 } 00:15:02.425 [2024-11-20 16:41:47.109625] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:15:02.425 [2024-11-20 16:41:47.109755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71044 ] 00:15:02.425 [2024-11-20 16:41:47.269456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.684 [2024-11-20 16:41:47.376328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.945 Running I/O for 5 seconds... 
00:15:04.833 1909.00 IOPS, 7.46 MiB/s [2024-11-20T16:41:50.661Z] 1332.50 IOPS, 5.21 MiB/s [2024-11-20T16:41:52.046Z] 3329.33 IOPS, 13.01 MiB/s [2024-11-20T16:41:52.989Z] 2579.75 IOPS, 10.08 MiB/s [2024-11-20T16:41:52.989Z] 2135.00 IOPS, 8.34 MiB/s 00:15:08.103 Latency(us) 00:15:08.103 [2024-11-20T16:41:52.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.103 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:08.103 xnvme_bdev : 5.15 2086.49 8.15 0.00 0.00 30141.41 47.46 372647.78 00:15:08.103 [2024-11-20T16:41:52.989Z] =================================================================================================================== 00:15:08.103 [2024-11-20T16:41:52.989Z] Total : 2086.49 8.15 0.00 0.00 30141.41 47.46 372647.78 00:15:08.675 ************************************ 00:15:08.675 END TEST xnvme_bdevperf 00:15:08.675 ************************************ 00:15:08.675 00:15:08.675 real 0m25.609s 00:15:08.675 user 0m14.648s 00:15:08.675 sys 0m10.484s 00:15:08.675 16:41:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.675 16:41:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.675 16:41:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:08.675 16:41:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.675 16:41:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.675 16:41:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.936 ************************************ 00:15:08.936 START TEST xnvme_fio_plugin 00:15:08.936 ************************************ 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:08.936 
16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.936 16:41:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.936 { 00:15:08.936 "subsystems": [ 00:15:08.936 { 00:15:08.936 "subsystem": "bdev", 00:15:08.936 "config": [ 00:15:08.936 { 00:15:08.936 "params": { 00:15:08.936 "io_mechanism": "io_uring_cmd", 00:15:08.936 "conserve_cpu": false, 00:15:08.936 "filename": "/dev/ng0n1", 00:15:08.936 "name": "xnvme_bdev" 00:15:08.936 }, 00:15:08.936 "method": "bdev_xnvme_create" 00:15:08.936 }, 00:15:08.936 { 00:15:08.936 "method": "bdev_wait_for_examine" 00:15:08.936 } 00:15:08.936 ] 00:15:08.936 } 00:15:08.936 ] 00:15:08.936 } 00:15:08.936 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:08.936 fio-3.35 00:15:08.936 Starting 1 thread 00:15:15.546 00:15:15.546 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71162: Wed Nov 20 16:41:59 2024 00:15:15.546 read: IOPS=39.7k, BW=155MiB/s (163MB/s)(776MiB/5004msec) 00:15:15.546 slat (nsec): min=2727, max=84377, avg=3819.44, stdev=1970.85 00:15:15.546 clat (usec): min=404, max=9618, avg=1454.35, stdev=335.65 00:15:15.546 lat (usec): min=407, max=9621, avg=1458.17, stdev=335.88 00:15:15.546 clat percentiles (usec): 00:15:15.546 | 1.00th=[ 865], 5.00th=[ 971], 10.00th=[ 1037], 20.00th=[ 1156], 00:15:15.546 | 30.00th=[ 1254], 40.00th=[ 1352], 50.00th=[ 1434], 60.00th=[ 1532], 00:15:15.546 | 70.00th=[ 1614], 80.00th=[ 1713], 90.00th=[ 1876], 95.00th=[ 2024], 00:15:15.546 | 99.00th=[ 2343], 99.50th=[ 2540], 99.90th=[ 3032], 99.95th=[ 3425], 00:15:15.546 | 99.99th=[ 4178] 00:15:15.546 bw ( KiB/s): min=146944, max=171682, per=100.00%, avg=159306.89, stdev=8393.71, samples=9 00:15:15.546 iops : min=36736, max=42920, avg=39826.67, stdev=2098.33, samples=9 00:15:15.546 lat (usec) : 500=0.01%, 750=0.04%, 1000=6.89% 00:15:15.546 lat (msec) : 2=87.55%, 4=5.51%, 10=0.01% 00:15:15.546 cpu : usr=38.54%, sys=60.26%, ctx=14, majf=0, minf=762 00:15:15.546 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:15.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.546 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:15:15.546 issued rwts: total=198698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.546 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:15.546 00:15:15.546 Run status group 0 (all jobs): 00:15:15.546 READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=776MiB (814MB), run=5004-5004msec 00:15:15.546 ----------------------------------------------------- 00:15:15.546 Suppressions used: 00:15:15.546 count bytes template 00:15:15.546 1 11 /usr/src/fio/parse.c 00:15:15.546 1 8 libtcmalloc_minimal.so 00:15:15.546 1 904 libcrypto.so 00:15:15.546 ----------------------------------------------------- 00:15:15.546 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.546 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.547 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:15.547 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:15.547 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:15.547 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:15.547 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:15.547 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:15.547 16:42:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.547 { 00:15:15.547 "subsystems": [ 00:15:15.547 { 00:15:15.547 "subsystem": "bdev", 00:15:15.547 "config": [ 00:15:15.547 { 00:15:15.547 "params": { 00:15:15.547 "io_mechanism": "io_uring_cmd", 00:15:15.547 "conserve_cpu": false, 00:15:15.547 "filename": "/dev/ng0n1", 00:15:15.547 "name": "xnvme_bdev" 00:15:15.547 }, 00:15:15.547 "method": "bdev_xnvme_create" 00:15:15.547 }, 00:15:15.547 { 00:15:15.547 "method": "bdev_wait_for_examine" 00:15:15.547 } 00:15:15.547 ] 00:15:15.547 } 00:15:15.547 ] 00:15:15.547 } 00:15:15.808 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:15.808 fio-3.35 00:15:15.808 Starting 1 thread 00:15:22.391 00:15:22.391 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71251: Wed Nov 20 16:42:06 2024 00:15:22.391 write: IOPS=31.0k, BW=121MiB/s (127MB/s)(605MiB/5002msec); 0 zone resets 00:15:22.391 slat (nsec): min=2807, max=77940, avg=3856.61, stdev=1982.88 00:15:22.391 clat (usec): min=54, max=389589, avg=1935.74, stdev=9451.34 00:15:22.391 lat (usec): min=58, max=389593, avg=1939.60, stdev=9451.34 00:15:22.391 clat percentiles (usec): 00:15:22.391 | 1.00th=[ 457], 5.00th=[ 791], 10.00th=[ 955], 20.00th=[ 1139], 00:15:22.391 | 30.00th=[ 1270], 40.00th=[ 1369], 50.00th=[ 1467], 60.00th=[ 1565], 00:15:22.391 | 70.00th=[ 1680], 80.00th=[ 1827], 90.00th=[ 2343], 95.00th=[ 3163], 00:15:22.391 | 99.00th=[ 5145], 99.50th=[ 5866], 99.90th=[135267], 99.95th=[208667], 00:15:22.391 | 99.99th=[387974] 00:15:22.391 bw ( KiB/s): min= 9568, max=155832, per=97.78%, avg=121163.56, stdev=53397.69, samples=9 00:15:22.391 iops : min= 2392, max=38958, avg=30290.89, stdev=13349.42, samples=9 00:15:22.391 lat (usec) : 100=0.03%, 250=0.18%, 500=1.08%, 750=2.73%, 1000=8.04% 00:15:22.391 lat (msec) : 2=73.15%, 4=12.30%, 10=2.32%, 20=0.01%, 100=0.04% 00:15:22.391 lat (msec) : 250=0.08%, 500=0.04% 00:15:22.391 cpu : usr=36.73%, sys=62.21%, ctx=12, majf=0, minf=762 00:15:22.391 IO depths : 1=0.9%, 2=1.8%, 4=4.0%, 8=8.9%, 16=21.4%, 32=60.4%, >=64=2.6% 00:15:22.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.391 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.2%, 32=0.4%, 64=1.5%, >=64=0.0% 00:15:22.391 issued rwts: total=0,154947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:22.391 00:15:22.391 Run status group 0 (all jobs): 00:15:22.391 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=605MiB (635MB), run=5002-5002msec 00:15:22.391 ----------------------------------------------------- 00:15:22.391 Suppressions used: 00:15:22.391 count bytes template 00:15:22.391 1 11 /usr/src/fio/parse.c 00:15:22.391 1 8 libtcmalloc_minimal.so 00:15:22.391 1 904 libcrypto.so 00:15:22.391 ----------------------------------------------------- 00:15:22.391 00:15:22.391 00:15:22.391 real 0m13.524s 00:15:22.391 user 0m6.430s 00:15:22.391 sys 0m6.655s 00:15:22.391 16:42:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.391 16:42:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 ************************************ 00:15:22.391 END TEST xnvme_fio_plugin 00:15:22.391 ************************************ 00:15:22.391 16:42:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:22.391 16:42:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:22.391 16:42:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:22.391 16:42:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:22.391 16:42:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:22.391 16:42:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.391 16:42:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 ************************************ 00:15:22.391 START TEST xnvme_rpc 00:15:22.391 ************************************ 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71331 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71331 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71331 ']' 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.391 16:42:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 [2024-11-20 16:42:07.237818] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:15:22.391 [2024-11-20 16:42:07.238063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71331 ] 00:15:22.651 [2024-11-20 16:42:07.399074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.651 [2024-11-20 16:42:07.501598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.593 xnvme_bdev 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71331 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71331 ']' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71331 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71331 00:15:23.593 killing process with pid 71331 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71331' 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71331 00:15:23.593 16:42:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71331 00:15:25.050 00:15:25.050 real 0m2.683s 00:15:25.050 user 0m2.830s 00:15:25.050 sys 0m0.370s 00:15:25.050 ************************************ 00:15:25.050 END TEST xnvme_rpc 00:15:25.050 ************************************ 00:15:25.050 16:42:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.050 16:42:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.050 16:42:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:25.050 16:42:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:25.050 16:42:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.050 16:42:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.050 ************************************ 00:15:25.050 START TEST xnvme_bdevperf 00:15:25.050 ************************************ 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:25.050 16:42:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:25.050 { 00:15:25.050 "subsystems": [ 00:15:25.050 { 00:15:25.050 "subsystem": "bdev", 00:15:25.050 "config": [ 00:15:25.050 { 00:15:25.050 "params": { 00:15:25.050 "io_mechanism": "io_uring_cmd", 00:15:25.050 "conserve_cpu": true, 00:15:25.050 "filename": "/dev/ng0n1", 00:15:25.050 "name": "xnvme_bdev" 00:15:25.050 }, 00:15:25.050 "method": "bdev_xnvme_create" 00:15:25.050 }, 00:15:25.050 { 00:15:25.050 "method": "bdev_wait_for_examine" 00:15:25.050 } 00:15:25.050 ] 00:15:25.050 } 00:15:25.050 ] 00:15:25.050 } 00:15:25.309 [2024-11-20 16:42:09.989850] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:15:25.309 [2024-11-20 16:42:09.990058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:15:25.309 [2024-11-20 16:42:10.168272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.570 [2024-11-20 16:42:10.271503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.831 Running I/O for 5 seconds... 00:15:27.731 36892.00 IOPS, 144.11 MiB/s [2024-11-20T16:42:13.558Z] 38545.00 IOPS, 150.57 MiB/s [2024-11-20T16:42:14.935Z] 39521.33 IOPS, 154.38 MiB/s [2024-11-20T16:42:15.898Z] 40539.50 IOPS, 158.36 MiB/s [2024-11-20T16:42:15.898Z] 40731.60 IOPS, 159.11 MiB/s 00:15:31.012 Latency(us) 00:15:31.012 [2024-11-20T16:42:15.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.012 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:31.012 xnvme_bdev : 5.00 40712.55 159.03 0.00 0.00 1567.94 143.36 19660.80 00:15:31.012 [2024-11-20T16:42:15.898Z] =================================================================================================================== 00:15:31.012 [2024-11-20T16:42:15.898Z] Total : 40712.55 159.03 0.00 0.00 1567.94 143.36 19660.80 00:15:31.583 16:42:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.583 16:42:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:31.583 16:42:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:31.583 16:42:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:31.583 16:42:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:31.583 { 00:15:31.583 "subsystems": [ 00:15:31.583 { 00:15:31.583 "subsystem": "bdev", 00:15:31.583 "config": [ 00:15:31.583 { 00:15:31.583 "params": { 00:15:31.583 "io_mechanism": "io_uring_cmd", 00:15:31.583 "conserve_cpu": true, 00:15:31.583 "filename": "/dev/ng0n1", 00:15:31.583 "name": "xnvme_bdev" 00:15:31.583 }, 00:15:31.583 "method": "bdev_xnvme_create" 00:15:31.583 }, 00:15:31.583 { 00:15:31.583 "method": "bdev_wait_for_examine" 00:15:31.583 } 00:15:31.583 ] 00:15:31.583 } 00:15:31.583 ] 00:15:31.583 } 00:15:31.583 [2024-11-20 16:42:16.340770] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
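Every bdevperf run in this block uses the same generated configuration, a single bdev_xnvme_create entry plus bdev_wait_for_examine, fed to --json through /dev/fd/62; only the -w workload changes (randread above, then randwrite, unmap and write_zeroes further down). An equivalent standalone sketch with the config in an ordinary file (the /tmp path and SPDK_DIR variable are illustrative):

#!/usr/bin/env bash
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

cat > /tmp/xnvme_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# 4 KiB random reads at queue depth 64 for 5 seconds against the xnvme_bdev target.
"$SPDK_DIR/build/examples/bdevperf" --json /tmp/xnvme_bdev.json \
  -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev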
00:15:31.583 [2024-11-20 16:42:16.340958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71479 ] 00:15:31.843 [2024-11-20 16:42:16.518803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.843 [2024-11-20 16:42:16.620606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.103 Running I/O for 5 seconds... 00:15:33.996 16743.00 IOPS, 65.40 MiB/s [2024-11-20T16:42:20.263Z] 17966.00 IOPS, 70.18 MiB/s [2024-11-20T16:42:21.203Z] 18360.67 IOPS, 71.72 MiB/s [2024-11-20T16:42:22.144Z] 18836.25 IOPS, 73.58 MiB/s [2024-11-20T16:42:22.144Z] 19463.60 IOPS, 76.03 MiB/s 00:15:37.258 Latency(us) 00:15:37.258 [2024-11-20T16:42:22.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.258 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:37.258 xnvme_bdev : 5.00 19457.17 76.00 0.00 0.00 3283.86 50.41 21475.64 00:15:37.258 [2024-11-20T16:42:22.144Z] =================================================================================================================== 00:15:37.258 [2024-11-20T16:42:22.144Z] Total : 19457.17 76.00 0.00 0.00 3283.86 50.41 21475.64 00:15:37.828 16:42:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:37.828 16:42:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:37.828 16:42:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:37.828 16:42:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:37.828 16:42:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:37.828 { 00:15:37.828 "subsystems": [ 00:15:37.828 { 00:15:37.828 "subsystem": "bdev", 00:15:37.828 "config": [ 00:15:37.828 { 00:15:37.828 "params": { 00:15:37.828 "io_mechanism": "io_uring_cmd", 00:15:37.828 "conserve_cpu": true, 00:15:37.828 "filename": "/dev/ng0n1", 00:15:37.828 "name": "xnvme_bdev" 00:15:37.828 }, 00:15:37.828 "method": "bdev_xnvme_create" 00:15:37.828 }, 00:15:37.828 { 00:15:37.829 "method": "bdev_wait_for_examine" 00:15:37.829 } 00:15:37.829 ] 00:15:37.829 } 00:15:37.829 ] 00:15:37.829 } 00:15:37.829 [2024-11-20 16:42:22.653108] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:15:37.829 [2024-11-20 16:42:22.653423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71548 ] 00:15:38.088 [2024-11-20 16:42:22.816133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.088 [2024-11-20 16:42:22.918927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.347 Running I/O for 5 seconds... 
00:15:40.676 74176.00 IOPS, 289.75 MiB/s [2024-11-20T16:42:26.501Z] 75168.00 IOPS, 293.62 MiB/s [2024-11-20T16:42:27.440Z] 76160.00 IOPS, 297.50 MiB/s [2024-11-20T16:42:28.383Z] 76512.00 IOPS, 298.88 MiB/s [2024-11-20T16:42:28.383Z] 76352.00 IOPS, 298.25 MiB/s 00:15:43.498 Latency(us) 00:15:43.498 [2024-11-20T16:42:28.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.498 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:43.498 xnvme_bdev : 5.00 76323.50 298.14 0.00 0.00 835.11 437.96 2797.88 00:15:43.498 [2024-11-20T16:42:28.384Z] =================================================================================================================== 00:15:43.498 [2024-11-20T16:42:28.384Z] Total : 76323.50 298.14 0.00 0.00 835.11 437.96 2797.88 00:15:44.069 16:42:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.069 16:42:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:44.069 16:42:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:44.069 16:42:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:44.069 16:42:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:44.069 { 00:15:44.069 "subsystems": [ 00:15:44.069 { 00:15:44.069 "subsystem": "bdev", 00:15:44.069 "config": [ 00:15:44.069 { 00:15:44.069 "params": { 00:15:44.069 "io_mechanism": "io_uring_cmd", 00:15:44.069 "conserve_cpu": true, 00:15:44.069 "filename": "/dev/ng0n1", 00:15:44.069 "name": "xnvme_bdev" 00:15:44.069 }, 00:15:44.069 "method": "bdev_xnvme_create" 00:15:44.069 }, 00:15:44.069 { 00:15:44.069 "method": "bdev_wait_for_examine" 00:15:44.069 } 00:15:44.069 ] 00:15:44.069 } 00:15:44.069 ] 00:15:44.069 } 00:15:44.330 [2024-11-20 16:42:28.969798] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:15:44.330 [2024-11-20 16:42:28.969973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71629 ] 00:15:44.330 [2024-11-20 16:42:29.133998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.591 [2024-11-20 16:42:29.296315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.852 Running I/O for 5 seconds... 
00:15:46.790 608.00 IOPS, 2.38 MiB/s [2024-11-20T16:42:32.619Z] 756.00 IOPS, 2.95 MiB/s [2024-11-20T16:42:33.560Z] 624.67 IOPS, 2.44 MiB/s [2024-11-20T16:42:34.999Z] 575.75 IOPS, 2.25 MiB/s [2024-11-20T16:42:34.999Z] 538.40 IOPS, 2.10 MiB/s 00:15:50.113 Latency(us) 00:15:50.113 [2024-11-20T16:42:34.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.113 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:50.113 xnvme_bdev : 5.16 534.74 2.09 0.00 0.00 118112.98 51.59 580749.78 00:15:50.113 [2024-11-20T16:42:34.999Z] =================================================================================================================== 00:15:50.113 [2024-11-20T16:42:34.999Z] Total : 534.74 2.09 0.00 0.00 118112.98 51.59 580749.78 00:15:50.685 00:15:50.685 real 0m25.529s 00:15:50.685 user 0m21.588s 00:15:50.685 sys 0m2.686s 00:15:50.685 16:42:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.685 ************************************ 00:15:50.685 END TEST xnvme_bdevperf 00:15:50.685 ************************************ 00:15:50.685 16:42:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 16:42:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:50.685 16:42:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:50.685 16:42:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.685 16:42:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 ************************************ 00:15:50.685 START TEST xnvme_fio_plugin 00:15:50.685 ************************************ 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:50.685 16:42:35 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:50.685 16:42:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:50.685 { 00:15:50.685 "subsystems": [ 00:15:50.685 { 00:15:50.685 "subsystem": "bdev", 00:15:50.685 "config": [ 00:15:50.685 { 00:15:50.685 "params": { 00:15:50.685 "io_mechanism": "io_uring_cmd", 00:15:50.685 "conserve_cpu": true, 00:15:50.685 "filename": "/dev/ng0n1", 00:15:50.685 "name": "xnvme_bdev" 00:15:50.685 }, 00:15:50.686 "method": "bdev_xnvme_create" 00:15:50.686 }, 00:15:50.686 { 00:15:50.686 "method": "bdev_wait_for_examine" 00:15:50.686 } 00:15:50.686 ] 00:15:50.686 } 00:15:50.686 ] 00:15:50.686 } 00:15:50.949 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:50.949 fio-3.35 00:15:50.949 Starting 1 thread 00:15:57.530 00:15:57.530 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71747: Wed Nov 20 16:42:41 2024 00:15:57.530 read: IOPS=47.2k, BW=184MiB/s (193MB/s)(922MiB/5001msec) 00:15:57.530 slat (usec): min=2, max=379, avg= 3.46, stdev= 1.81 00:15:57.530 clat (usec): min=588, max=11124, avg=1217.90, stdev=272.17 00:15:57.530 lat (usec): min=591, max=11133, avg=1221.36, stdev=272.31 00:15:57.530 clat percentiles (usec): 00:15:57.530 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 1004], 00:15:57.530 | 30.00th=[ 1074], 40.00th=[ 1139], 50.00th=[ 1188], 60.00th=[ 1254], 00:15:57.530 | 70.00th=[ 1319], 80.00th=[ 1385], 90.00th=[ 1516], 95.00th=[ 1647], 00:15:57.530 | 99.00th=[ 2024], 99.50th=[ 2278], 99.90th=[ 3064], 99.95th=[ 3294], 00:15:57.530 | 99.99th=[ 5342] 00:15:57.530 bw ( KiB/s): min=178024, max=205312, per=100.00%, avg=190774.56, stdev=7887.88, samples=9 00:15:57.530 iops : min=44506, max=51328, avg=47693.56, stdev=1971.68, samples=9 00:15:57.530 lat (usec) : 750=0.29%, 1000=19.56% 00:15:57.530 lat (msec) : 2=79.04%, 4=1.09%, 10=0.02%, 20=0.01% 00:15:57.530 cpu : usr=66.50%, sys=30.50%, ctx=17, majf=0, minf=762 00:15:57.530 IO depths : 1=1.4%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:57.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.530 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:15:57.530 issued rwts: total=236031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.530 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.530 00:15:57.530 Run status group 0 (all jobs): 00:15:57.530 READ: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=922MiB (967MB), run=5001-5001msec 00:15:57.530 ----------------------------------------------------- 00:15:57.530 Suppressions used: 00:15:57.530 count bytes template 00:15:57.530 1 11 /usr/src/fio/parse.c 00:15:57.530 1 8 libtcmalloc_minimal.so 00:15:57.530 1 904 libcrypto.so 00:15:57.530 ----------------------------------------------------- 00:15:57.530 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:57.530 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.531 16:42:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.531 { 00:15:57.531 "subsystems": [ 00:15:57.531 { 00:15:57.531 "subsystem": "bdev", 00:15:57.531 "config": [ 00:15:57.531 { 00:15:57.531 "params": { 00:15:57.531 "io_mechanism": "io_uring_cmd", 00:15:57.531 "conserve_cpu": true, 00:15:57.531 "filename": "/dev/ng0n1", 00:15:57.531 "name": "xnvme_bdev" 00:15:57.531 }, 00:15:57.531 "method": "bdev_xnvme_create" 00:15:57.531 }, 00:15:57.531 { 00:15:57.531 "method": "bdev_wait_for_examine" 00:15:57.531 } 00:15:57.531 ] 00:15:57.531 } 00:15:57.531 ] 00:15:57.531 } 00:15:57.791 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:57.791 fio-3.35 00:15:57.791 Starting 1 thread 00:16:04.461 00:16:04.461 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71838: Wed Nov 20 16:42:48 2024 00:16:04.461 write: IOPS=39.1k, BW=153MiB/s (160MB/s)(765MiB/5001msec); 0 zone resets 00:16:04.461 slat (usec): min=2, max=636, avg= 3.63, stdev= 3.88 00:16:04.461 clat (usec): min=59, max=30903, avg=1511.07, stdev=1079.90 00:16:04.461 lat (usec): min=62, max=30907, avg=1514.71, stdev=1079.97 00:16:04.461 clat percentiles (usec): 00:16:04.461 | 1.00th=[ 465], 5.00th=[ 807], 10.00th=[ 947], 20.00th=[ 1090], 00:16:04.461 | 30.00th=[ 1172], 40.00th=[ 1237], 50.00th=[ 1287], 60.00th=[ 1336], 00:16:04.461 | 70.00th=[ 1401], 80.00th=[ 1516], 90.00th=[ 2089], 95.00th=[ 3490], 00:16:04.461 | 99.00th=[ 5735], 99.50th=[ 6587], 99.90th=[11076], 99.95th=[14877], 00:16:04.461 | 99.99th=[28443] 00:16:04.462 bw ( KiB/s): min=139792, max=172336, per=100.00%, avg=162544.89, stdev=10460.41, samples=9 00:16:04.462 iops : min=34948, max=43084, avg=40636.22, stdev=2615.10, samples=9 00:16:04.462 lat (usec) : 100=0.01%, 250=0.08%, 500=1.16%, 750=2.69%, 1000=9.36% 00:16:04.462 lat (msec) : 2=76.10%, 4=6.82%, 10=3.65%, 20=0.10%, 50=0.03% 00:16:04.462 cpu : usr=80.06%, sys=14.04%, ctx=11, majf=0, minf=762 00:16:04.462 IO depths : 1=1.1%, 2=2.2%, 4=4.7%, 8=10.0%, 16=22.3%, 32=57.4%, >=64=2.4% 00:16:04.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.462 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.2%, 32=0.3%, 64=1.5%, >=64=0.0% 00:16:04.462 issued rwts: total=0,195771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:04.462 00:16:04.462 Run status group 0 (all jobs): 00:16:04.462 WRITE: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=765MiB (802MB), run=5001-5001msec 00:16:04.462 ----------------------------------------------------- 00:16:04.462 Suppressions used: 00:16:04.462 count bytes template 00:16:04.462 1 11 /usr/src/fio/parse.c 00:16:04.462 1 8 libtcmalloc_minimal.so 00:16:04.462 1 904 libcrypto.so 00:16:04.462 ----------------------------------------------------- 00:16:04.462 00:16:04.462 00:16:04.462 real 0m13.690s 00:16:04.462 user 0m10.177s 00:16:04.462 sys 0m2.746s 00:16:04.462 16:42:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.462 16:42:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:04.462 ************************************ 00:16:04.462 END TEST xnvme_fio_plugin 00:16:04.462 ************************************ 00:16:04.462 Process with pid 71331 is not found 00:16:04.462 16:42:49 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71331 00:16:04.462 16:42:49 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71331 ']' 00:16:04.462 16:42:49 
nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71331 00:16:04.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71331) - No such process 00:16:04.462 16:42:49 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71331 is not found' 00:16:04.462 00:16:04.462 real 3m27.689s 00:16:04.462 user 2m5.813s 00:16:04.462 sys 1m8.645s 00:16:04.462 ************************************ 00:16:04.462 END TEST nvme_xnvme 00:16:04.462 ************************************ 00:16:04.462 16:42:49 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.462 16:42:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.462 16:42:49 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:04.462 16:42:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.462 16:42:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.462 16:42:49 -- common/autotest_common.sh@10 -- # set +x 00:16:04.463 ************************************ 00:16:04.463 START TEST blockdev_xnvme 00:16:04.463 ************************************ 00:16:04.463 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:04.463 * Looking for test storage... 00:16:04.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:04.463 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:04.463 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:04.463 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:04.723 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:04.723 16:42:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:04.724 16:42:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.724 16:42:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:04.724 16:42:49 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.724 16:42:49 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.724 16:42:49 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.724 16:42:49 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:04.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.724 --rc genhtml_branch_coverage=1 00:16:04.724 --rc genhtml_function_coverage=1 00:16:04.724 --rc genhtml_legend=1 00:16:04.724 --rc geninfo_all_blocks=1 00:16:04.724 --rc geninfo_unexecuted_blocks=1 00:16:04.724 00:16:04.724 ' 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:04.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.724 --rc genhtml_branch_coverage=1 00:16:04.724 --rc genhtml_function_coverage=1 00:16:04.724 --rc genhtml_legend=1 00:16:04.724 --rc geninfo_all_blocks=1 00:16:04.724 --rc geninfo_unexecuted_blocks=1 00:16:04.724 00:16:04.724 ' 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:04.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.724 --rc genhtml_branch_coverage=1 00:16:04.724 --rc genhtml_function_coverage=1 00:16:04.724 --rc genhtml_legend=1 00:16:04.724 --rc geninfo_all_blocks=1 00:16:04.724 --rc geninfo_unexecuted_blocks=1 00:16:04.724 00:16:04.724 ' 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:04.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.724 --rc genhtml_branch_coverage=1 00:16:04.724 --rc genhtml_function_coverage=1 00:16:04.724 --rc genhtml_legend=1 00:16:04.724 --rc geninfo_all_blocks=1 00:16:04.724 --rc geninfo_unexecuted_blocks=1 00:16:04.724 00:16:04.724 ' 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71971 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71971 00:16:04.724 16:42:49 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71971 ']' 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.724 16:42:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.724 [2024-11-20 16:42:49.473164] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
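The setup that follows scans /dev/nvme*n*, skips zoned namespaces, and registers each remaining namespace with the freshly started spdk_tgt as an xNVMe bdev using the io_uring mechanism with conserve_cpu. Condensed into a sketch (device names taken from this run, SPDK_DIR illustrative), the registration amounts to:

#!/usr/bin/env bash
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

for dev in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
  # Bdev name is the basename of the device node; -c enables conserve_cpu.
  "$RPC" bdev_xnvme_create "$dev" "${dev##*/}" io_uring -c
done
"$RPC" bdev_wait_for_examine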
00:16:04.724 [2024-11-20 16:42:49.473583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71971 ] 00:16:04.983 [2024-11-20 16:42:49.631002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.983 [2024-11-20 16:42:49.736407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.552 16:42:50 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.552 16:42:50 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:05.552 16:42:50 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:05.552 16:42:50 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:16:05.552 16:42:50 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:05.552 16:42:50 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:05.552 16:42:50 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:06.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:06.411 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:06.411 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:06.411 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:06.411 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:06.669 16:42:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:06.669 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:06.670 nvme0n1 00:16:06.670 nvme0n2 00:16:06.670 nvme0n3 00:16:06.670 nvme1n1 00:16:06.670 nvme2n1 00:16:06.670 nvme3n1 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.670 16:42:51 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c5acbb0f-6686-49c6-9138-32414e8638ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5acbb0f-6686-49c6-9138-32414e8638ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5aa50518-0d7d-48f1-b462-c7db8b86478b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5aa50518-0d7d-48f1-b462-c7db8b86478b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "70c7e690-2407-48a2-a6f8-93fd8df5471d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70c7e690-2407-48a2-a6f8-93fd8df5471d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fdd52346-ea11-4a75-8b09-9133f81eca92"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fdd52346-ea11-4a75-8b09-9133f81eca92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "04e4a98a-940e-477c-9bce-011cd88ca47c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "04e4a98a-940e-477c-9bce-011cd88ca47c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c5b73102-573f-4bee-ba30-7da21413a365"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c5b73102-573f-4bee-ba30-7da21413a365",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:06.670 16:42:51 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71971 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71971 ']' 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71971 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71971 00:16:06.670 killing process with pid 71971 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71971' 00:16:06.670 16:42:51 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71971 00:16:06.670 
16:42:51 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71971 00:16:08.581 16:42:53 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:08.582 16:42:53 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:08.582 16:42:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:08.582 16:42:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.582 16:42:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.582 ************************************ 00:16:08.582 START TEST bdev_hello_world 00:16:08.582 ************************************ 00:16:08.582 16:42:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:08.582 [2024-11-20 16:42:53.101162] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:16:08.582 [2024-11-20 16:42:53.101472] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72245 ] 00:16:08.582 [2024-11-20 16:42:53.261684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.582 [2024-11-20 16:42:53.363349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.840 [2024-11-20 16:42:53.695394] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:08.840 [2024-11-20 16:42:53.695443] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:08.840 [2024-11-20 16:42:53.695459] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:08.840 [2024-11-20 16:42:53.697302] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:08.840 [2024-11-20 16:42:53.697689] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:08.840 [2024-11-20 16:42:53.697715] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:08.840 [2024-11-20 16:42:53.697837] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
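The hello_bdev output above comes from the stock SPDK example application pointed at the bdev.json this suite generated (one xNVMe bdev per namespace) and told to open nvme0n1; it writes a buffer, reads it back, and prints the recovered string. Reproducing it outside the harness is roughly:

#!/usr/bin/env bash
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# Assumes test/bdev/bdev.json already exists (the suite writes it before this point)
# and that hugepages are configured.
"$SPDK_DIR/build/examples/hello_bdev" \
  --json "$SPDK_DIR/test/bdev/bdev.json" \
  -b nvme0n1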
00:16:08.840 00:16:08.840 [2024-11-20 16:42:53.697857] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:09.779 ************************************ 00:16:09.780 END TEST bdev_hello_world 00:16:09.780 ************************************ 00:16:09.780 00:16:09.780 real 0m1.363s 00:16:09.780 user 0m1.079s 00:16:09.780 sys 0m0.171s 00:16:09.780 16:42:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.780 16:42:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:09.780 16:42:54 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:09.780 16:42:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:09.780 16:42:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.780 16:42:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:09.780 ************************************ 00:16:09.780 START TEST bdev_bounds 00:16:09.780 ************************************ 00:16:09.780 Process bdevio pid: 72282 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72282 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72282' 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72282 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72282 ']' 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.780 16:42:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:09.780 [2024-11-20 16:42:54.506240] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
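bdev_bounds starts bdevio in wait mode (-w) with no reserved memory (-s 0) on the same bdev.json, then triggers the CUnit I/O suites over RPC from tests.py, which is what produces the per-bdev test listings below. A rough standalone equivalent (the polling loop stands in for the harness's waitforlisten; SPDK_DIR is illustrative):

#!/usr/bin/env bash
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
  --json "$SPDK_DIR/test/bdev/bdev.json" &
bdevio_pid=$!

# Wait for the app's RPC socket, then run the suites against every registered bdev.
until "$RPC" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests

kill "$bdevio_pid" 2>/dev/null || true   # harmless if bdevio already exited after the suites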
00:16:09.780 [2024-11-20 16:42:54.506359] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72282 ] 00:16:09.780 [2024-11-20 16:42:54.661700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.038 [2024-11-20 16:42:54.764843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.038 [2024-11-20 16:42:54.765222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.038 [2024-11-20 16:42:54.765222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.605 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.605 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:10.605 16:42:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:10.605 I/O targets: 00:16:10.605 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:10.605 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:10.605 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:10.605 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:10.605 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:10.605 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:10.605 00:16:10.605 00:16:10.605 CUnit - A unit testing framework for C - Version 2.1-3 00:16:10.605 http://cunit.sourceforge.net/ 00:16:10.605 00:16:10.605 00:16:10.605 Suite: bdevio tests on: nvme3n1 00:16:10.605 Test: blockdev write read block ...passed 00:16:10.605 Test: blockdev write zeroes read block ...passed 00:16:10.605 Test: blockdev write zeroes read no split ...passed 00:16:10.605 Test: blockdev write zeroes read split ...passed 00:16:10.864 Test: blockdev write zeroes read split partial ...passed 00:16:10.864 Test: blockdev reset ...passed 00:16:10.864 Test: blockdev write read 8 blocks ...passed 00:16:10.864 Test: blockdev write read size > 128k ...passed 00:16:10.864 Test: blockdev write read invalid size ...passed 00:16:10.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.864 Test: blockdev write read max offset ...passed 00:16:10.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.864 Test: blockdev writev readv 8 blocks ...passed 00:16:10.864 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.864 Test: blockdev writev readv block ...passed 00:16:10.864 Test: blockdev writev readv size > 128k ...passed 00:16:10.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.864 Test: blockdev comparev and writev ...passed 00:16:10.864 Test: blockdev nvme passthru rw ...passed 00:16:10.864 Test: blockdev nvme passthru vendor specific ...passed 00:16:10.864 Test: blockdev nvme admin passthru ...passed 00:16:10.864 Test: blockdev copy ...passed 00:16:10.864 Suite: bdevio tests on: nvme2n1 00:16:10.864 Test: blockdev write read block ...passed 00:16:10.864 Test: blockdev write zeroes read block ...passed 00:16:10.864 Test: blockdev write zeroes read no split ...passed 00:16:10.864 Test: blockdev write zeroes read split ...passed 00:16:10.864 Test: blockdev write zeroes read split partial ...passed 00:16:10.864 Test: blockdev reset ...passed 
00:16:10.864 Test: blockdev write read 8 blocks ...passed 00:16:10.864 Test: blockdev write read size > 128k ...passed 00:16:10.864 Test: blockdev write read invalid size ...passed 00:16:10.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.864 Test: blockdev write read max offset ...passed 00:16:10.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.864 Test: blockdev writev readv 8 blocks ...passed 00:16:10.864 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.864 Test: blockdev writev readv block ...passed 00:16:10.864 Test: blockdev writev readv size > 128k ...passed 00:16:10.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.864 Test: blockdev comparev and writev ...passed 00:16:10.864 Test: blockdev nvme passthru rw ...passed 00:16:10.864 Test: blockdev nvme passthru vendor specific ...passed 00:16:10.864 Test: blockdev nvme admin passthru ...passed 00:16:10.864 Test: blockdev copy ...passed 00:16:10.864 Suite: bdevio tests on: nvme1n1 00:16:10.864 Test: blockdev write read block ...passed 00:16:10.864 Test: blockdev write zeroes read block ...passed 00:16:10.864 Test: blockdev write zeroes read no split ...passed 00:16:10.864 Test: blockdev write zeroes read split ...passed 00:16:10.864 Test: blockdev write zeroes read split partial ...passed 00:16:10.864 Test: blockdev reset ...passed 00:16:10.864 Test: blockdev write read 8 blocks ...passed 00:16:10.864 Test: blockdev write read size > 128k ...passed 00:16:10.864 Test: blockdev write read invalid size ...passed 00:16:10.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.864 Test: blockdev write read max offset ...passed 00:16:10.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.864 Test: blockdev writev readv 8 blocks ...passed 00:16:10.864 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.864 Test: blockdev writev readv block ...passed 00:16:10.864 Test: blockdev writev readv size > 128k ...passed 00:16:10.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.864 Test: blockdev comparev and writev ...passed 00:16:10.864 Test: blockdev nvme passthru rw ...passed 00:16:10.864 Test: blockdev nvme passthru vendor specific ...passed 00:16:10.864 Test: blockdev nvme admin passthru ...passed 00:16:10.864 Test: blockdev copy ...passed 00:16:10.864 Suite: bdevio tests on: nvme0n3 00:16:10.864 Test: blockdev write read block ...passed 00:16:10.864 Test: blockdev write zeroes read block ...passed 00:16:10.864 Test: blockdev write zeroes read no split ...passed 00:16:10.864 Test: blockdev write zeroes read split ...passed 00:16:10.864 Test: blockdev write zeroes read split partial ...passed 00:16:10.864 Test: blockdev reset ...passed 00:16:10.864 Test: blockdev write read 8 blocks ...passed 00:16:10.864 Test: blockdev write read size > 128k ...passed 00:16:10.864 Test: blockdev write read invalid size ...passed 00:16:10.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.864 Test: blockdev write read max offset ...passed 00:16:10.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.864 Test: blockdev writev readv 8 blocks 
...passed 00:16:10.864 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.864 Test: blockdev writev readv block ...passed 00:16:10.864 Test: blockdev writev readv size > 128k ...passed 00:16:10.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.864 Test: blockdev comparev and writev ...passed 00:16:10.864 Test: blockdev nvme passthru rw ...passed 00:16:10.864 Test: blockdev nvme passthru vendor specific ...passed 00:16:10.864 Test: blockdev nvme admin passthru ...passed 00:16:10.864 Test: blockdev copy ...passed 00:16:10.864 Suite: bdevio tests on: nvme0n2 00:16:10.864 Test: blockdev write read block ...passed 00:16:10.864 Test: blockdev write zeroes read block ...passed 00:16:10.864 Test: blockdev write zeroes read no split ...passed 00:16:10.864 Test: blockdev write zeroes read split ...passed 00:16:10.864 Test: blockdev write zeroes read split partial ...passed 00:16:10.864 Test: blockdev reset ...passed 00:16:10.864 Test: blockdev write read 8 blocks ...passed 00:16:10.864 Test: blockdev write read size > 128k ...passed 00:16:10.864 Test: blockdev write read invalid size ...passed 00:16:10.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.864 Test: blockdev write read max offset ...passed 00:16:10.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.864 Test: blockdev writev readv 8 blocks ...passed 00:16:10.864 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.864 Test: blockdev writev readv block ...passed 00:16:10.864 Test: blockdev writev readv size > 128k ...passed 00:16:10.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.864 Test: blockdev comparev and writev ...passed 00:16:10.864 Test: blockdev nvme passthru rw ...passed 00:16:10.864 Test: blockdev nvme passthru vendor specific ...passed 00:16:10.864 Test: blockdev nvme admin passthru ...passed 00:16:10.864 Test: blockdev copy ...passed 00:16:10.864 Suite: bdevio tests on: nvme0n1 00:16:10.864 Test: blockdev write read block ...passed 00:16:10.864 Test: blockdev write zeroes read block ...passed 00:16:10.864 Test: blockdev write zeroes read no split ...passed 00:16:11.132 Test: blockdev write zeroes read split ...passed 00:16:11.132 Test: blockdev write zeroes read split partial ...passed 00:16:11.132 Test: blockdev reset ...passed 00:16:11.132 Test: blockdev write read 8 blocks ...passed 00:16:11.132 Test: blockdev write read size > 128k ...passed 00:16:11.132 Test: blockdev write read invalid size ...passed 00:16:11.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:11.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:11.132 Test: blockdev write read max offset ...passed 00:16:11.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.132 Test: blockdev writev readv 8 blocks ...passed 00:16:11.132 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.132 Test: blockdev writev readv block ...passed 00:16:11.132 Test: blockdev writev readv size > 128k ...passed 00:16:11.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.132 Test: blockdev comparev and writev ...passed 00:16:11.132 Test: blockdev nvme passthru rw ...passed 00:16:11.132 Test: blockdev nvme passthru vendor specific ...passed 00:16:11.132 Test: blockdev nvme admin passthru ...passed 00:16:11.132 Test: blockdev copy ...passed 
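Note: all six bdevio suites above follow the same pattern: the standalone bdevio app is launched with the shared bdev.json config and, as the waitforlisten on /var/tmp/spdk.sock earlier suggests, the -w flag keeps it idle until tests.py issues the perform_tests RPC. A rough sketch of that pairing, built only from the commands already echoed in this trace:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # once /var/tmp/spdk.sock is up, kick off the CUnit suites over RPC:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests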
00:16:11.132 00:16:11.132 Run Summary: Type Total Ran Passed Failed Inactive 00:16:11.132 suites 6 6 n/a 0 0 00:16:11.132 tests 138 138 138 0 0 00:16:11.132 asserts 780 780 780 0 n/a 00:16:11.132 00:16:11.132 Elapsed time = 0.905 seconds 00:16:11.132 0 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72282 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72282 ']' 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72282 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72282 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72282' 00:16:11.132 killing process with pid 72282 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72282 00:16:11.132 16:42:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72282 00:16:11.725 16:42:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:11.725 00:16:11.725 real 0m2.108s 00:16:11.725 user 0m5.325s 00:16:11.725 sys 0m0.278s 00:16:11.725 16:42:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.725 16:42:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:11.725 ************************************ 00:16:11.725 END TEST bdev_bounds 00:16:11.725 ************************************ 00:16:11.725 16:42:56 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:11.725 16:42:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:11.725 16:42:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.725 16:42:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.983 ************************************ 00:16:11.983 START TEST bdev_nbd 00:16:11.983 ************************************ 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
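Note: the bdev_nbd stage being set up here exports each of the six bdevs as a kernel /dev/nbdN device over a dedicated RPC socket and verifies it with plain block I/O. A condensed sketch of one start/verify/stop cycle, assembled from the commands echoed below (single disk shown; the scratch file is the nbdtest path used in this run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  # bdev_svc loads the bdevs from bdev.json and listens on the nbd RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r $SOCK -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  $RPC -s $SOCK nbd_start_disk nvme0n1              # returns the allocated device, /dev/nbd0 in this run
  grep -q -w nbd0 /proc/partitions                  # wait until the kernel exposes the device
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct                  # one-block direct read through the nbd
  $RPC -s $SOCK nbd_stop_disk /dev/nbd0

The data-verify pass that follows repeats this with fixed device names (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd13), fills the nbdrandtest file with 1 MiB of random data, dd-writes it to each device, and cmp-compares the first 1M back.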
00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:11.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72337 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72337 /var/tmp/spdk-nbd.sock 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72337 ']' 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:11.983 16:42:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:11.983 [2024-11-20 16:42:56.689243] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:16:11.983 [2024-11-20 16:42:56.689527] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.983 [2024-11-20 16:42:56.852773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.241 [2024-11-20 16:42:56.954068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:12.810 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:12.811 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:12.811 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:13.069 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:13.069 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:13.069 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:13.069 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:13.069 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:13.069 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.069 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.070 
1+0 records in 00:16:13.070 1+0 records out 00:16:13.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421144 s, 9.7 MB/s 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:13.070 16:42:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.329 1+0 records in 00:16:13.329 1+0 records out 00:16:13.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285175 s, 14.4 MB/s 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:13.329 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:13.587 16:42:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.587 1+0 records in 00:16:13.587 1+0 records out 00:16:13.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381453 s, 10.7 MB/s 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:13.587 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.845 1+0 records in 00:16:13.845 1+0 records out 00:16:13.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392127 s, 10.4 MB/s 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:13.845 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.104 1+0 records in 00:16:14.104 1+0 records out 00:16:14.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368086 s, 11.1 MB/s 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:14.104 16:42:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:14.362 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:14.362 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:14.362 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:14.362 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:14.362 16:42:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:14.362 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:14.362 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.363 1+0 records in 00:16:14.363 1+0 records out 00:16:14.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429581 s, 9.5 MB/s 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:14.363 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:14.620 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:14.620 { 00:16:14.620 "nbd_device": "/dev/nbd0", 00:16:14.620 "bdev_name": "nvme0n1" 00:16:14.620 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd1", 00:16:14.621 "bdev_name": "nvme0n2" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd2", 00:16:14.621 "bdev_name": "nvme0n3" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd3", 00:16:14.621 "bdev_name": "nvme1n1" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd4", 00:16:14.621 "bdev_name": "nvme2n1" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd5", 00:16:14.621 "bdev_name": "nvme3n1" 00:16:14.621 } 00:16:14.621 ]' 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd0", 00:16:14.621 "bdev_name": "nvme0n1" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd1", 00:16:14.621 "bdev_name": "nvme0n2" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd2", 00:16:14.621 "bdev_name": "nvme0n3" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd3", 00:16:14.621 "bdev_name": "nvme1n1" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": "/dev/nbd4", 00:16:14.621 "bdev_name": "nvme2n1" 00:16:14.621 }, 00:16:14.621 { 00:16:14.621 "nbd_device": 
"/dev/nbd5", 00:16:14.621 "bdev_name": "nvme3n1" 00:16:14.621 } 00:16:14.621 ]' 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:14.621 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:14.879 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.137 16:42:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.397 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.659 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:15.917 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:16.176 16:43:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:16.435 /dev/nbd0 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.435 1+0 records in 00:16:16.435 1+0 records out 00:16:16.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304613 s, 13.4 MB/s 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:16.435 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:16.693 /dev/nbd1 00:16:16.693 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:16.693 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:16.693 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:16.693 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:16.693 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:16.693 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.694 1+0 records in 00:16:16.694 1+0 records out 00:16:16.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389019 s, 10.5 MB/s 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:16.694 16:43:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:16.694 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:16.951 /dev/nbd10 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.951 1+0 records in 00:16:16.951 1+0 records out 00:16:16.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502439 s, 8.2 MB/s 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:16.951 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:17.209 /dev/nbd11 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.209 16:43:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.209 1+0 records in 00:16:17.209 1+0 records out 00:16:17.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343425 s, 11.9 MB/s 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:17.209 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.210 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.210 16:43:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:17.210 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.210 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:17.210 16:43:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:17.468 /dev/nbd12 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.468 1+0 records in 00:16:17.468 1+0 records out 00:16:17.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581884 s, 7.0 MB/s 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:17.468 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:17.726 /dev/nbd13 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.726 1+0 records in 00:16:17.726 1+0 records out 00:16:17.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420144 s, 9.7 MB/s 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:17.726 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd0", 00:16:17.984 "bdev_name": "nvme0n1" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd1", 00:16:17.984 "bdev_name": "nvme0n2" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd10", 00:16:17.984 "bdev_name": "nvme0n3" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd11", 00:16:17.984 "bdev_name": "nvme1n1" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd12", 00:16:17.984 "bdev_name": "nvme2n1" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd13", 00:16:17.984 "bdev_name": "nvme3n1" 00:16:17.984 } 00:16:17.984 ]' 00:16:17.984 16:43:02 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd0", 00:16:17.984 "bdev_name": "nvme0n1" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd1", 00:16:17.984 "bdev_name": "nvme0n2" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd10", 00:16:17.984 "bdev_name": "nvme0n3" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd11", 00:16:17.984 "bdev_name": "nvme1n1" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd12", 00:16:17.984 "bdev_name": "nvme2n1" 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "nbd_device": "/dev/nbd13", 00:16:17.984 "bdev_name": "nvme3n1" 00:16:17.984 } 00:16:17.984 ]' 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:17.984 /dev/nbd1 00:16:17.984 /dev/nbd10 00:16:17.984 /dev/nbd11 00:16:17.984 /dev/nbd12 00:16:17.984 /dev/nbd13' 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:17.984 /dev/nbd1 00:16:17.984 /dev/nbd10 00:16:17.984 /dev/nbd11 00:16:17.984 /dev/nbd12 00:16:17.984 /dev/nbd13' 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:17.984 256+0 records in 00:16:17.984 256+0 records out 00:16:17.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.006256 s, 168 MB/s 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:17.984 256+0 records in 00:16:17.984 256+0 records out 00:16:17.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0585305 s, 17.9 MB/s 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:17.984 256+0 records in 00:16:17.984 256+0 records out 00:16:17.984 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.067087 s, 15.6 MB/s 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:17.984 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:18.242 256+0 records in 00:16:18.242 256+0 records out 00:16:18.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0621139 s, 16.9 MB/s 00:16:18.242 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:18.242 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:18.242 256+0 records in 00:16:18.242 256+0 records out 00:16:18.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0598642 s, 17.5 MB/s 00:16:18.242 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:18.242 16:43:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:18.242 256+0 records in 00:16:18.242 256+0 records out 00:16:18.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0730315 s, 14.4 MB/s 00:16:18.242 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:18.242 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:18.242 256+0 records in 00:16:18.242 256+0 records out 00:16:18.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.060241 s, 17.4 MB/s 00:16:18.242 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:18.242 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:18.242 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:18.243 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.502 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.760 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.019 16:43:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.278 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.537 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.796 16:43:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:19.796 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:20.054 malloc_lvol_verify 00:16:20.054 16:43:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:20.312 87fa24c2-0d3f-483d-a5f4-a074024e5230 00:16:20.312 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:20.570 706698ca-4c11-4048-b8a4-fcbc0f6f22d9 00:16:20.570 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:20.828 /dev/nbd0 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
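The lvol round-trip exercised here can be reproduced by hand with the same RPCs the harness issues. The sketch below is a minimal, hedged reconstruction using the socket and script paths visible in this run; the 16 MiB / 512-byte malloc geometry and the 4 MiB lvol size mirror the arguments above, and the mke2fs output that follows in the log is produced by the final formatting step.

# Hedged sketch: build an lvol on a malloc bdev and expose it via NBD for a format test.
# Assumes an SPDK target is already listening on /var/tmp/spdk-nbd.sock, as in this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol, addressed as lvs/lvol
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                                 # format it to prove I/O works end to end
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0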
00:16:20.828 mke2fs 1.47.0 (5-Feb-2023) 00:16:20.828 Discarding device blocks: 0/4096 done 00:16:20.828 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:20.828 00:16:20.828 Allocating group tables: 0/1 done 00:16:20.828 Writing inode tables: 0/1 done 00:16:20.828 Creating journal (1024 blocks): done 00:16:20.828 Writing superblocks and filesystem accounting information: 0/1 done 00:16:20.828 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.828 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72337 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72337 ']' 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72337 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72337 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.087 killing process with pid 72337 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72337' 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72337 00:16:21.087 16:43:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72337 00:16:22.021 16:43:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:22.021 00:16:22.021 real 0m9.957s 00:16:22.021 user 0m14.311s 00:16:22.021 sys 0m3.274s 00:16:22.021 ************************************ 00:16:22.021 END TEST bdev_nbd 00:16:22.021 ************************************ 00:16:22.021 16:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.021 
16:43:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:22.021 16:43:06 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:22.021 16:43:06 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:22.021 16:43:06 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:22.021 16:43:06 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:22.021 16:43:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:22.021 16:43:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.021 16:43:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.021 ************************************ 00:16:22.021 START TEST bdev_fio 00:16:22.021 ************************************ 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:22.021 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:22.021 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:22.022 ************************************ 00:16:22.022 START TEST bdev_fio_rw_verify 00:16:22.022 ************************************ 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:22.022 16:43:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:22.022 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:22.022 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:22.022 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:22.022 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:22.022 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:22.022 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:22.022 fio-3.35 00:16:22.022 Starting 6 threads 00:16:34.220 00:16:34.220 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72734: Wed Nov 20 16:43:17 2024 00:16:34.220 read: IOPS=39.8k, BW=155MiB/s (163MB/s)(1555MiB/10004msec) 00:16:34.220 slat (usec): min=2, max=751, avg= 4.50, stdev= 3.50 00:16:34.220 clat (usec): min=75, max=261610, avg=416.85, 
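In this phase fio is not talking to kernel block devices at all: the spdk_bdev external ioengine is LD_PRELOADed (together with libasan, which must be loaded first on ASan builds) and I/O is submitted directly to the bdevs described in bdev.json. A hedged sketch of that invocation, with the plugin, fio and config paths taken from this run:

# Hedged sketch: drive the SPDK bdev layer from fio via the external spdk_bdev ioengine.
# Assumes fio is installed under /usr/src/fio and this is an ASan build (otherwise asan_lib is empty).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --verify_state_save=0 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio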
stdev=1200.24 00:16:34.220 lat (usec): min=77, max=261620, avg=421.35, stdev=1200.35 00:16:34.220 clat percentiles (usec): 00:16:34.220 | 50.000th=[ 367], 99.000th=[ 1369], 99.900th=[ 2966], 00:16:34.220 | 99.990th=[ 4047], 99.999th=[261096] 00:16:34.220 write: IOPS=40.2k, BW=157MiB/s (165MB/s)(1572MiB/10004msec); 0 zone resets 00:16:34.220 slat (usec): min=10, max=3284, avg=24.06, stdev=44.12 00:16:34.220 clat (usec): min=51, max=6644, avg=564.71, stdev=303.23 00:16:34.220 lat (usec): min=65, max=6725, avg=588.77, stdev=309.12 00:16:34.220 clat percentiles (usec): 00:16:34.220 | 50.000th=[ 523], 99.000th=[ 1598], 99.900th=[ 3261], 99.990th=[ 5342], 00:16:34.220 | 99.999th=[ 6587] 00:16:34.220 bw ( KiB/s): min=101875, max=199535, per=99.59%, avg=160298.63, stdev=4592.73, samples=114 00:16:34.220 iops : min=25467, max=49883, avg=40074.11, stdev=1148.24, samples=114 00:16:34.220 lat (usec) : 100=0.13%, 250=16.44%, 500=43.45%, 750=27.75%, 1000=8.21% 00:16:34.220 lat (msec) : 2=3.57%, 4=0.44%, 10=0.02%, 500=0.01% 00:16:34.220 cpu : usr=46.57%, sys=34.06%, ctx=10021, majf=0, minf=32191 00:16:34.220 IO depths : 1=11.2%, 2=23.4%, 4=51.4%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.220 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.220 issued rwts: total=397986,402539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.220 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:34.220 00:16:34.220 Run status group 0 (all jobs): 00:16:34.220 READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=1555MiB (1630MB), run=10004-10004msec 00:16:34.220 WRITE: bw=157MiB/s (165MB/s), 157MiB/s-157MiB/s (165MB/s-165MB/s), io=1572MiB (1649MB), run=10004-10004msec 00:16:34.220 ----------------------------------------------------- 00:16:34.220 Suppressions used: 00:16:34.220 count bytes template 00:16:34.220 6 48 /usr/src/fio/parse.c 00:16:34.220 4284 411264 /usr/src/fio/iolog.c 00:16:34.220 1 8 libtcmalloc_minimal.so 00:16:34.220 1 904 libcrypto.so 00:16:34.220 ----------------------------------------------------- 00:16:34.220 00:16:34.220 00:16:34.220 real 0m11.910s 00:16:34.220 user 0m29.418s 00:16:34.220 sys 0m20.721s 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:34.220 ************************************ 00:16:34.220 END TEST bdev_fio_rw_verify 00:16:34.220 ************************************ 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c5acbb0f-6686-49c6-9138-32414e8638ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5acbb0f-6686-49c6-9138-32414e8638ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5aa50518-0d7d-48f1-b462-c7db8b86478b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5aa50518-0d7d-48f1-b462-c7db8b86478b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "70c7e690-2407-48a2-a6f8-93fd8df5471d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70c7e690-2407-48a2-a6f8-93fd8df5471d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fdd52346-ea11-4a75-8b09-9133f81eca92"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fdd52346-ea11-4a75-8b09-9133f81eca92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "04e4a98a-940e-477c-9bce-011cd88ca47c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "04e4a98a-940e-477c-9bce-011cd88ca47c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c5b73102-573f-4bee-ba30-7da21413a365"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c5b73102-573f-4bee-ba30-7da21413a365",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:34.220 /home/vagrant/spdk_repo/spdk 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:34.220 16:43:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
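Before the trim phase the harness filters bdevs by capability: the JSON descriptions above are piped through jq and only devices reporting supported_io_types.unmap == true would receive a trim job, which is why none of these xNVMe bdevs (all with "unmap": false) are selected. A hedged sketch of the equivalent query against a live target; the default /var/tmp/spdk.sock socket is an assumption here:

# Hedged sketch: list only bdevs that advertise unmap/trim support.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'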
00:16:34.220 00:16:34.220 real 0m12.047s 00:16:34.220 user 0m29.487s 00:16:34.220 sys 0m20.789s 00:16:34.221 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.221 ************************************ 00:16:34.221 END TEST bdev_fio 00:16:34.221 16:43:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:34.221 ************************************ 00:16:34.221 16:43:18 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:34.221 16:43:18 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:34.221 16:43:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:34.221 16:43:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.221 16:43:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.221 ************************************ 00:16:34.221 START TEST bdev_verify 00:16:34.221 ************************************ 00:16:34.221 16:43:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:34.221 [2024-11-20 16:43:18.753431] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:16:34.221 [2024-11-20 16:43:18.753555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72907 ] 00:16:34.221 [2024-11-20 16:43:18.911538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:34.221 [2024-11-20 16:43:19.015870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.221 [2024-11-20 16:43:19.015897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.785 Running I/O for 5 seconds... 
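The verify pass whose per-device results follow is a plain bdevperf run; stripped of the run_test timing wrapper it can be reproduced as below, with paths and flags copied from the command line above. The later big-I/O and write-zeroes passes reuse the same form and only change -o, -w and -t.

# Hedged sketch: 5 s verify workload, queue depth 128, 4 KiB I/Os, on cores 0-1 (-m 0x3).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3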
00:16:36.652 25056.00 IOPS, 97.88 MiB/s [2024-11-20T16:43:22.912Z] 23952.00 IOPS, 93.56 MiB/s [2024-11-20T16:43:23.844Z] 24128.00 IOPS, 94.25 MiB/s [2024-11-20T16:43:24.780Z] 24168.00 IOPS, 94.41 MiB/s [2024-11-20T16:43:24.780Z] 24089.60 IOPS, 94.10 MiB/s 00:16:39.894 Latency(us) 00:16:39.894 [2024-11-20T16:43:24.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.894 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x0 length 0x80000 00:16:39.894 nvme0n1 : 5.03 1729.11 6.75 0.00 0.00 73889.74 16232.76 66544.25 00:16:39.894 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x80000 length 0x80000 00:16:39.894 nvme0n1 : 5.07 1742.69 6.81 0.00 0.00 73305.29 11191.53 72997.02 00:16:39.894 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x0 length 0x80000 00:16:39.894 nvme0n2 : 5.02 1732.21 6.77 0.00 0.00 73610.18 7763.50 72593.72 00:16:39.894 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x80000 length 0x80000 00:16:39.894 nvme0n2 : 5.07 1742.12 6.81 0.00 0.00 73160.86 15022.87 67754.14 00:16:39.894 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x0 length 0x80000 00:16:39.894 nvme0n3 : 5.06 1745.88 6.82 0.00 0.00 72890.11 10536.17 65334.35 00:16:39.894 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x80000 length 0x80000 00:16:39.894 nvme0n3 : 5.06 1746.36 6.82 0.00 0.00 72815.70 8217.21 58881.58 00:16:39.894 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x0 length 0x20000 00:16:39.894 nvme1n1 : 5.08 1738.17 6.79 0.00 0.00 73091.68 9527.93 66947.54 00:16:39.894 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x20000 length 0x20000 00:16:39.894 nvme1n1 : 5.08 1762.48 6.88 0.00 0.00 71983.65 3957.37 67754.14 00:16:39.894 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x0 length 0xbd0bd 00:16:39.894 nvme2n1 : 5.08 3164.32 12.36 0.00 0.00 40044.57 3730.51 61301.37 00:16:39.894 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:39.894 nvme2n1 : 5.08 3209.58 12.54 0.00 0.00 39386.49 3049.94 60494.77 00:16:39.894 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0x0 length 0xa0000 00:16:39.894 nvme3n1 : 5.07 1743.01 6.81 0.00 0.00 72588.14 8116.38 64931.05 00:16:39.894 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.894 Verification LBA range: start 0xa0000 length 0xa0000 00:16:39.894 nvme3n1 : 5.08 1739.59 6.80 0.00 0.00 72632.62 8418.86 70173.93 00:16:39.894 [2024-11-20T16:43:24.780Z] =================================================================================================================== 00:16:39.894 [2024-11-20T16:43:24.780Z] Total : 23795.51 92.95 0.00 0.00 64054.82 3049.94 72997.02 00:16:40.459 00:16:40.459 real 0m6.581s 00:16:40.459 user 0m10.409s 00:16:40.459 sys 0m1.763s 00:16:40.459 16:43:25 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.459 16:43:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:40.459 ************************************ 00:16:40.459 END TEST bdev_verify 00:16:40.459 ************************************ 00:16:40.459 16:43:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:40.459 16:43:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:40.459 16:43:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.459 16:43:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.459 ************************************ 00:16:40.459 START TEST bdev_verify_big_io 00:16:40.459 ************************************ 00:16:40.459 16:43:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:40.717 [2024-11-20 16:43:25.373529] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:16:40.717 [2024-11-20 16:43:25.373644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73007 ] 00:16:40.717 [2024-11-20 16:43:25.535803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:41.002 [2024-11-20 16:43:25.639680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.002 [2024-11-20 16:43:25.639805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.259 Running I/O for 5 seconds... 
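Each of these sub-tests is launched through run_test, which is what produces the START/END banners, the real/user/sys timing and the xtrace toggling seen throughout this log. A hedged, simplified sketch of that wrapper pattern; the real implementation lives in common/autotest_common.sh and also manages xtrace state and exit-code bookkeeping:

# Hedged sketch of the run_test pattern: banner, timed execution, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test bdev_verify_big_io ./bdevperf --json bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3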
00:16:47.094 912.00 IOPS, 57.00 MiB/s [2024-11-20T16:43:32.238Z] 2211.00 IOPS, 138.19 MiB/s [2024-11-20T16:43:32.238Z] 2807.33 IOPS, 175.46 MiB/s 00:16:47.352 Latency(us) 00:16:47.352 [2024-11-20T16:43:32.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.352 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x0 length 0x8000 00:16:47.352 nvme0n1 : 5.35 107.63 6.73 0.00 0.00 1151587.25 125022.52 1910021.51 00:16:47.352 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x8000 length 0x8000 00:16:47.352 nvme0n1 : 5.44 117.66 7.35 0.00 0.00 1066417.55 114536.76 1206669.00 00:16:47.352 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x0 length 0x8000 00:16:47.352 nvme0n2 : 5.69 112.55 7.03 0.00 0.00 1017526.90 6906.49 1013085.74 00:16:47.352 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x8000 length 0x8000 00:16:47.352 nvme0n2 : 5.78 114.45 7.15 0.00 0.00 1025980.32 136314.88 1619646.62 00:16:47.352 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x0 length 0x8000 00:16:47.352 nvme0n3 : 5.99 141.58 8.85 0.00 0.00 818245.02 15022.87 806596.92 00:16:47.352 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x8000 length 0x8000 00:16:47.352 nvme0n3 : 6.03 90.25 5.64 0.00 0.00 1275281.63 6956.90 1961643.72 00:16:47.352 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x0 length 0x2000 00:16:47.352 nvme1n1 : 5.90 97.61 6.10 0.00 0.00 1125083.68 206488.81 2064888.12 00:16:47.352 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x2000 length 0x2000 00:16:47.352 nvme1n1 : 5.84 142.42 8.90 0.00 0.00 787389.11 38716.65 1509949.44 00:16:47.352 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x0 length 0xbd0b 00:16:47.352 nvme2n1 : 5.99 144.17 9.01 0.00 0.00 747399.86 7662.67 2529487.95 00:16:47.352 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:47.352 nvme2n1 : 6.02 138.13 8.63 0.00 0.00 785813.83 2495.41 2387526.89 00:16:47.352 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0x0 length 0xa000 00:16:47.352 nvme3n1 : 6.00 117.35 7.33 0.00 0.00 889824.38 1310.72 2271376.94 00:16:47.352 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:47.352 Verification LBA range: start 0xa000 length 0xa000 00:16:47.352 nvme3n1 : 6.04 145.80 9.11 0.00 0.00 715879.38 513.58 1367988.38 00:16:47.352 [2024-11-20T16:43:32.238Z] =================================================================================================================== 00:16:47.352 [2024-11-20T16:43:32.238Z] Total : 1469.59 91.85 0.00 0.00 921490.98 513.58 2529487.95 00:16:48.287 00:16:48.287 real 0m7.707s 00:16:48.287 user 0m14.308s 00:16:48.287 sys 0m0.348s 00:16:48.287 16:43:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.287 16:43:33 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.287 ************************************ 00:16:48.287 END TEST bdev_verify_big_io 00:16:48.287 ************************************ 00:16:48.287 16:43:33 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:48.287 16:43:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:48.287 16:43:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.287 16:43:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:48.287 ************************************ 00:16:48.287 START TEST bdev_write_zeroes 00:16:48.287 ************************************ 00:16:48.287 16:43:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:48.287 [2024-11-20 16:43:33.126089] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:16:48.287 [2024-11-20 16:43:33.126205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73111 ] 00:16:48.546 [2024-11-20 16:43:33.286099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.546 [2024-11-20 16:43:33.385597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.113 Running I/O for 1 seconds... 
00:16:50.047 76512.00 IOPS, 298.88 MiB/s 00:16:50.047 Latency(us) 00:16:50.047 [2024-11-20T16:43:34.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.047 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:50.047 nvme0n1 : 1.02 10938.94 42.73 0.00 0.00 11690.71 7158.55 22181.42 00:16:50.047 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:50.047 nvme0n2 : 1.02 10926.38 42.68 0.00 0.00 11696.38 7208.96 22483.89 00:16:50.047 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:50.047 nvme0n3 : 1.02 10914.09 42.63 0.00 0.00 11700.45 7309.78 22887.19 00:16:50.047 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:50.047 nvme1n1 : 1.02 10901.88 42.59 0.00 0.00 11705.46 7410.61 23189.66 00:16:50.047 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:50.047 nvme2n1 : 1.03 21186.49 82.76 0.00 0.00 6016.25 3276.80 18955.03 00:16:50.047 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:50.047 nvme3n1 : 1.03 10846.17 42.37 0.00 0.00 11711.00 6150.30 24298.73 00:16:50.047 [2024-11-20T16:43:34.933Z] =================================================================================================================== 00:16:50.047 [2024-11-20T16:43:34.933Z] Total : 75713.95 295.76 0.00 0.00 10105.18 3276.80 24298.73 00:16:50.613 00:16:50.613 real 0m2.434s 00:16:50.613 user 0m1.686s 00:16:50.613 sys 0m0.579s 00:16:50.613 16:43:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.613 16:43:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:50.613 ************************************ 00:16:50.613 END TEST bdev_write_zeroes 00:16:50.613 ************************************ 00:16:50.872 16:43:35 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:50.872 16:43:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:50.872 16:43:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.872 16:43:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.872 ************************************ 00:16:50.872 START TEST bdev_json_nonenclosed 00:16:50.872 ************************************ 00:16:50.872 16:43:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:50.872 [2024-11-20 16:43:35.605786] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:16:50.872 [2024-11-20 16:43:35.605907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73159 ] 00:16:51.130 [2024-11-20 16:43:35.763903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.130 [2024-11-20 16:43:35.863022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.130 [2024-11-20 16:43:35.863102] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:51.130 [2024-11-20 16:43:35.863118] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:51.130 [2024-11-20 16:43:35.863128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:51.388 00:16:51.388 real 0m0.502s 00:16:51.388 user 0m0.308s 00:16:51.388 sys 0m0.090s 00:16:51.388 16:43:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.388 16:43:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:51.388 ************************************ 00:16:51.388 END TEST bdev_json_nonenclosed 00:16:51.388 ************************************ 00:16:51.388 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:51.388 16:43:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:51.388 16:43:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.388 16:43:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:51.388 ************************************ 00:16:51.388 START TEST bdev_json_nonarray 00:16:51.388 ************************************ 00:16:51.388 16:43:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:51.388 [2024-11-20 16:43:36.148849] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:16:51.388 [2024-11-20 16:43:36.148963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73184 ] 00:16:51.646 [2024-11-20 16:43:36.309765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.646 [2024-11-20 16:43:36.408283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.646 [2024-11-20 16:43:36.408370] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:51.646 [2024-11-20 16:43:36.408398] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:51.646 [2024-11-20 16:43:36.408407] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:51.904 00:16:51.904 real 0m0.502s 00:16:51.904 user 0m0.308s 00:16:51.904 sys 0m0.089s 00:16:51.904 16:43:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.904 16:43:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:51.904 ************************************ 00:16:51.904 END TEST bdev_json_nonarray 00:16:51.904 ************************************ 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:51.904 16:43:36 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:52.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:30.932 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:30.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:30.932 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:31.500 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:31.500 00:17:31.500 real 1m27.118s 00:17:31.500 user 1m24.274s 00:17:31.500 sys 1m57.420s 00:17:31.500 16:44:16 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.500 ************************************ 00:17:31.500 END TEST blockdev_xnvme 00:17:31.500 16:44:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:31.500 ************************************ 00:17:31.758 16:44:16 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:31.758 16:44:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:31.758 16:44:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.758 16:44:16 -- common/autotest_common.sh@10 -- # set +x 00:17:31.758 ************************************ 00:17:31.758 START TEST ublk 00:17:31.758 ************************************ 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:31.758 * Looking for test storage... 
00:17:31.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.758 16:44:16 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.758 16:44:16 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.758 16:44:16 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.758 16:44:16 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.758 16:44:16 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.758 16:44:16 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.758 16:44:16 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.758 16:44:16 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:31.758 16:44:16 ublk -- scripts/common.sh@345 -- # : 1 00:17:31.758 16:44:16 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.758 16:44:16 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.758 16:44:16 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:31.758 16:44:16 ublk -- scripts/common.sh@353 -- # local d=1 00:17:31.758 16:44:16 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.758 16:44:16 ublk -- scripts/common.sh@355 -- # echo 1 00:17:31.758 16:44:16 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.758 16:44:16 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@353 -- # local d=2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.758 16:44:16 ublk -- scripts/common.sh@355 -- # echo 2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.758 16:44:16 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.758 16:44:16 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.758 16:44:16 ublk -- scripts/common.sh@368 -- # return 0 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.758 --rc genhtml_branch_coverage=1 00:17:31.758 --rc genhtml_function_coverage=1 00:17:31.758 --rc genhtml_legend=1 00:17:31.758 --rc geninfo_all_blocks=1 00:17:31.758 --rc geninfo_unexecuted_blocks=1 00:17:31.758 00:17:31.758 ' 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.758 --rc genhtml_branch_coverage=1 00:17:31.758 --rc genhtml_function_coverage=1 00:17:31.758 --rc genhtml_legend=1 00:17:31.758 --rc geninfo_all_blocks=1 00:17:31.758 --rc geninfo_unexecuted_blocks=1 00:17:31.758 00:17:31.758 ' 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.758 --rc genhtml_branch_coverage=1 00:17:31.758 --rc 
genhtml_function_coverage=1 00:17:31.758 --rc genhtml_legend=1 00:17:31.758 --rc geninfo_all_blocks=1 00:17:31.758 --rc geninfo_unexecuted_blocks=1 00:17:31.758 00:17:31.758 ' 00:17:31.758 16:44:16 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.758 --rc genhtml_branch_coverage=1 00:17:31.758 --rc genhtml_function_coverage=1 00:17:31.758 --rc genhtml_legend=1 00:17:31.758 --rc geninfo_all_blocks=1 00:17:31.758 --rc geninfo_unexecuted_blocks=1 00:17:31.758 00:17:31.758 ' 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:31.758 16:44:16 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:31.758 16:44:16 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:31.758 16:44:16 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:31.758 16:44:16 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:31.758 16:44:16 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:31.758 16:44:16 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:31.758 16:44:16 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:31.758 16:44:16 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:31.758 16:44:16 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:31.759 16:44:16 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:31.759 16:44:16 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.759 16:44:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:31.759 ************************************ 00:17:31.759 START TEST test_save_ublk_config 00:17:31.759 ************************************ 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73499 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73499 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73499 ']' 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:31.759 16:44:16 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.759 16:44:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:31.759 [2024-11-20 16:44:16.641634] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:17:31.759 [2024-11-20 16:44:16.642077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73499 ] 00:17:32.016 [2024-11-20 16:44:16.798790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.274 [2024-11-20 16:44:16.902093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.839 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.839 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:32.839 16:44:17 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:32.839 16:44:17 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:32.839 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.839 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:32.839 [2024-11-20 16:44:17.524428] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:32.839 [2024-11-20 16:44:17.525717] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:32.839 malloc0 00:17:32.839 [2024-11-20 16:44:17.612526] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:32.839 [2024-11-20 16:44:17.612613] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:32.839 [2024-11-20 16:44:17.612624] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:32.839 [2024-11-20 16:44:17.612632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:32.839 [2024-11-20 16:44:17.621472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:32.839 [2024-11-20 16:44:17.621496] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:32.839 [2024-11-20 16:44:17.628402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:32.839 [2024-11-20 16:44:17.628520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:32.839 [2024-11-20 16:44:17.645402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:32.839 0 00:17:32.840 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.840 16:44:17 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:32.840 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.840 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:33.098 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.098 16:44:17 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:33.098 
"subsystems": [ 00:17:33.098 { 00:17:33.098 "subsystem": "fsdev", 00:17:33.098 "config": [ 00:17:33.098 { 00:17:33.098 "method": "fsdev_set_opts", 00:17:33.098 "params": { 00:17:33.098 "fsdev_io_pool_size": 65535, 00:17:33.098 "fsdev_io_cache_size": 256 00:17:33.098 } 00:17:33.098 } 00:17:33.098 ] 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "subsystem": "keyring", 00:17:33.098 "config": [] 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "subsystem": "iobuf", 00:17:33.098 "config": [ 00:17:33.098 { 00:17:33.098 "method": "iobuf_set_options", 00:17:33.098 "params": { 00:17:33.098 "small_pool_count": 8192, 00:17:33.098 "large_pool_count": 1024, 00:17:33.098 "small_bufsize": 8192, 00:17:33.098 "large_bufsize": 135168, 00:17:33.098 "enable_numa": false 00:17:33.098 } 00:17:33.098 } 00:17:33.098 ] 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "subsystem": "sock", 00:17:33.098 "config": [ 00:17:33.098 { 00:17:33.098 "method": "sock_set_default_impl", 00:17:33.098 "params": { 00:17:33.098 "impl_name": "posix" 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "sock_impl_set_options", 00:17:33.098 "params": { 00:17:33.098 "impl_name": "ssl", 00:17:33.098 "recv_buf_size": 4096, 00:17:33.098 "send_buf_size": 4096, 00:17:33.098 "enable_recv_pipe": true, 00:17:33.098 "enable_quickack": false, 00:17:33.098 "enable_placement_id": 0, 00:17:33.098 "enable_zerocopy_send_server": true, 00:17:33.098 "enable_zerocopy_send_client": false, 00:17:33.098 "zerocopy_threshold": 0, 00:17:33.098 "tls_version": 0, 00:17:33.098 "enable_ktls": false 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "sock_impl_set_options", 00:17:33.098 "params": { 00:17:33.098 "impl_name": "posix", 00:17:33.098 "recv_buf_size": 2097152, 00:17:33.098 "send_buf_size": 2097152, 00:17:33.098 "enable_recv_pipe": true, 00:17:33.098 "enable_quickack": false, 00:17:33.098 "enable_placement_id": 0, 00:17:33.098 "enable_zerocopy_send_server": true, 00:17:33.098 "enable_zerocopy_send_client": false, 00:17:33.098 "zerocopy_threshold": 0, 00:17:33.098 "tls_version": 0, 00:17:33.098 "enable_ktls": false 00:17:33.098 } 00:17:33.098 } 00:17:33.098 ] 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "subsystem": "vmd", 00:17:33.098 "config": [] 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "subsystem": "accel", 00:17:33.098 "config": [ 00:17:33.098 { 00:17:33.098 "method": "accel_set_options", 00:17:33.098 "params": { 00:17:33.098 "small_cache_size": 128, 00:17:33.098 "large_cache_size": 16, 00:17:33.098 "task_count": 2048, 00:17:33.098 "sequence_count": 2048, 00:17:33.098 "buf_count": 2048 00:17:33.098 } 00:17:33.098 } 00:17:33.098 ] 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "subsystem": "bdev", 00:17:33.098 "config": [ 00:17:33.098 { 00:17:33.098 "method": "bdev_set_options", 00:17:33.098 "params": { 00:17:33.098 "bdev_io_pool_size": 65535, 00:17:33.098 "bdev_io_cache_size": 256, 00:17:33.098 "bdev_auto_examine": true, 00:17:33.098 "iobuf_small_cache_size": 128, 00:17:33.098 "iobuf_large_cache_size": 16 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "bdev_raid_set_options", 00:17:33.098 "params": { 00:17:33.098 "process_window_size_kb": 1024, 00:17:33.098 "process_max_bandwidth_mb_sec": 0 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "bdev_iscsi_set_options", 00:17:33.098 "params": { 00:17:33.098 "timeout_sec": 30 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "bdev_nvme_set_options", 00:17:33.098 "params": { 00:17:33.098 "action_on_timeout": "none", 
00:17:33.098 "timeout_us": 0, 00:17:33.098 "timeout_admin_us": 0, 00:17:33.098 "keep_alive_timeout_ms": 10000, 00:17:33.098 "arbitration_burst": 0, 00:17:33.098 "low_priority_weight": 0, 00:17:33.098 "medium_priority_weight": 0, 00:17:33.098 "high_priority_weight": 0, 00:17:33.098 "nvme_adminq_poll_period_us": 10000, 00:17:33.098 "nvme_ioq_poll_period_us": 0, 00:17:33.098 "io_queue_requests": 0, 00:17:33.098 "delay_cmd_submit": true, 00:17:33.098 "transport_retry_count": 4, 00:17:33.098 "bdev_retry_count": 3, 00:17:33.098 "transport_ack_timeout": 0, 00:17:33.098 "ctrlr_loss_timeout_sec": 0, 00:17:33.098 "reconnect_delay_sec": 0, 00:17:33.098 "fast_io_fail_timeout_sec": 0, 00:17:33.098 "disable_auto_failback": false, 00:17:33.098 "generate_uuids": false, 00:17:33.098 "transport_tos": 0, 00:17:33.098 "nvme_error_stat": false, 00:17:33.098 "rdma_srq_size": 0, 00:17:33.098 "io_path_stat": false, 00:17:33.098 "allow_accel_sequence": false, 00:17:33.098 "rdma_max_cq_size": 0, 00:17:33.098 "rdma_cm_event_timeout_ms": 0, 00:17:33.098 "dhchap_digests": [ 00:17:33.098 "sha256", 00:17:33.098 "sha384", 00:17:33.098 "sha512" 00:17:33.098 ], 00:17:33.098 "dhchap_dhgroups": [ 00:17:33.098 "null", 00:17:33.098 "ffdhe2048", 00:17:33.098 "ffdhe3072", 00:17:33.098 "ffdhe4096", 00:17:33.098 "ffdhe6144", 00:17:33.098 "ffdhe8192" 00:17:33.098 ] 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "bdev_nvme_set_hotplug", 00:17:33.098 "params": { 00:17:33.098 "period_us": 100000, 00:17:33.098 "enable": false 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "bdev_malloc_create", 00:17:33.098 "params": { 00:17:33.098 "name": "malloc0", 00:17:33.098 "num_blocks": 8192, 00:17:33.098 "block_size": 4096, 00:17:33.098 "physical_block_size": 4096, 00:17:33.098 "uuid": "97886e3e-6bb9-4f8a-b331-00e5ff10e7f2", 00:17:33.098 "optimal_io_boundary": 0, 00:17:33.098 "md_size": 0, 00:17:33.098 "dif_type": 0, 00:17:33.098 "dif_is_head_of_md": false, 00:17:33.098 "dif_pi_format": 0 00:17:33.098 } 00:17:33.098 }, 00:17:33.098 { 00:17:33.098 "method": "bdev_wait_for_examine" 00:17:33.098 } 00:17:33.099 ] 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "scsi", 00:17:33.099 "config": null 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "scheduler", 00:17:33.099 "config": [ 00:17:33.099 { 00:17:33.099 "method": "framework_set_scheduler", 00:17:33.099 "params": { 00:17:33.099 "name": "static" 00:17:33.099 } 00:17:33.099 } 00:17:33.099 ] 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "vhost_scsi", 00:17:33.099 "config": [] 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "vhost_blk", 00:17:33.099 "config": [] 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "ublk", 00:17:33.099 "config": [ 00:17:33.099 { 00:17:33.099 "method": "ublk_create_target", 00:17:33.099 "params": { 00:17:33.099 "cpumask": "1" 00:17:33.099 } 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "method": "ublk_start_disk", 00:17:33.099 "params": { 00:17:33.099 "bdev_name": "malloc0", 00:17:33.099 "ublk_id": 0, 00:17:33.099 "num_queues": 1, 00:17:33.099 "queue_depth": 128 00:17:33.099 } 00:17:33.099 } 00:17:33.099 ] 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "nbd", 00:17:33.099 "config": [] 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "nvmf", 00:17:33.099 "config": [ 00:17:33.099 { 00:17:33.099 "method": "nvmf_set_config", 00:17:33.099 "params": { 00:17:33.099 "discovery_filter": "match_any", 00:17:33.099 "admin_cmd_passthru": { 00:17:33.099 "identify_ctrlr": false 
00:17:33.099 }, 00:17:33.099 "dhchap_digests": [ 00:17:33.099 "sha256", 00:17:33.099 "sha384", 00:17:33.099 "sha512" 00:17:33.099 ], 00:17:33.099 "dhchap_dhgroups": [ 00:17:33.099 "null", 00:17:33.099 "ffdhe2048", 00:17:33.099 "ffdhe3072", 00:17:33.099 "ffdhe4096", 00:17:33.099 "ffdhe6144", 00:17:33.099 "ffdhe8192" 00:17:33.099 ] 00:17:33.099 } 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "method": "nvmf_set_max_subsystems", 00:17:33.099 "params": { 00:17:33.099 "max_subsystems": 1024 00:17:33.099 } 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "method": "nvmf_set_crdt", 00:17:33.099 "params": { 00:17:33.099 "crdt1": 0, 00:17:33.099 "crdt2": 0, 00:17:33.099 "crdt3": 0 00:17:33.099 } 00:17:33.099 } 00:17:33.099 ] 00:17:33.099 }, 00:17:33.099 { 00:17:33.099 "subsystem": "iscsi", 00:17:33.099 "config": [ 00:17:33.099 { 00:17:33.099 "method": "iscsi_set_options", 00:17:33.099 "params": { 00:17:33.099 "node_base": "iqn.2016-06.io.spdk", 00:17:33.099 "max_sessions": 128, 00:17:33.099 "max_connections_per_session": 2, 00:17:33.099 "max_queue_depth": 64, 00:17:33.099 "default_time2wait": 2, 00:17:33.099 "default_time2retain": 20, 00:17:33.099 "first_burst_length": 8192, 00:17:33.099 "immediate_data": true, 00:17:33.099 "allow_duplicated_isid": false, 00:17:33.099 "error_recovery_level": 0, 00:17:33.099 "nop_timeout": 60, 00:17:33.099 "nop_in_interval": 30, 00:17:33.099 "disable_chap": false, 00:17:33.099 "require_chap": false, 00:17:33.099 "mutual_chap": false, 00:17:33.099 "chap_group": 0, 00:17:33.099 "max_large_datain_per_connection": 64, 00:17:33.099 "max_r2t_per_connection": 4, 00:17:33.099 "pdu_pool_size": 36864, 00:17:33.099 "immediate_data_pool_size": 16384, 00:17:33.099 "data_out_pool_size": 2048 00:17:33.099 } 00:17:33.099 } 00:17:33.099 ] 00:17:33.099 } 00:17:33.099 ] 00:17:33.099 }' 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73499 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73499 ']' 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73499 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73499 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.099 killing process with pid 73499 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73499' 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73499 00:17:33.099 16:44:17 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73499 00:17:34.474 [2024-11-20 16:44:19.018175] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:34.474 [2024-11-20 16:44:19.052412] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:34.474 [2024-11-20 16:44:19.052582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:34.474 [2024-11-20 16:44:19.061430] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:34.474 [2024-11-20 
16:44:19.061484] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:34.474 [2024-11-20 16:44:19.061496] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:34.474 [2024-11-20 16:44:19.061520] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:34.474 [2024-11-20 16:44:19.061666] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73555 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73555 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73555 ']' 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:35.848 16:44:20 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:35.848 "subsystems": [ 00:17:35.848 { 00:17:35.848 "subsystem": "fsdev", 00:17:35.848 "config": [ 00:17:35.848 { 00:17:35.848 "method": "fsdev_set_opts", 00:17:35.848 "params": { 00:17:35.848 "fsdev_io_pool_size": 65535, 00:17:35.848 "fsdev_io_cache_size": 256 00:17:35.848 } 00:17:35.848 } 00:17:35.848 ] 00:17:35.848 }, 00:17:35.848 { 00:17:35.848 "subsystem": "keyring", 00:17:35.848 "config": [] 00:17:35.848 }, 00:17:35.848 { 00:17:35.848 "subsystem": "iobuf", 00:17:35.848 "config": [ 00:17:35.848 { 00:17:35.848 "method": "iobuf_set_options", 00:17:35.848 "params": { 00:17:35.848 "small_pool_count": 8192, 00:17:35.848 "large_pool_count": 1024, 00:17:35.848 "small_bufsize": 8192, 00:17:35.849 "large_bufsize": 135168, 00:17:35.849 "enable_numa": false 00:17:35.849 } 00:17:35.849 } 00:17:35.849 ] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "sock", 00:17:35.849 "config": [ 00:17:35.849 { 00:17:35.849 "method": "sock_set_default_impl", 00:17:35.849 "params": { 00:17:35.849 "impl_name": "posix" 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "sock_impl_set_options", 00:17:35.849 "params": { 00:17:35.849 "impl_name": "ssl", 00:17:35.849 "recv_buf_size": 4096, 00:17:35.849 "send_buf_size": 4096, 00:17:35.849 "enable_recv_pipe": true, 00:17:35.849 "enable_quickack": false, 00:17:35.849 "enable_placement_id": 0, 00:17:35.849 "enable_zerocopy_send_server": true, 00:17:35.849 "enable_zerocopy_send_client": false, 00:17:35.849 "zerocopy_threshold": 0, 00:17:35.849 "tls_version": 0, 00:17:35.849 "enable_ktls": false 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "sock_impl_set_options", 00:17:35.849 "params": { 00:17:35.849 "impl_name": "posix", 00:17:35.849 "recv_buf_size": 2097152, 00:17:35.849 "send_buf_size": 2097152, 00:17:35.849 "enable_recv_pipe": true, 00:17:35.849 "enable_quickack": false, 00:17:35.849 "enable_placement_id": 0, 00:17:35.849 "enable_zerocopy_send_server": true, 
00:17:35.849 "enable_zerocopy_send_client": false, 00:17:35.849 "zerocopy_threshold": 0, 00:17:35.849 "tls_version": 0, 00:17:35.849 "enable_ktls": false 00:17:35.849 } 00:17:35.849 } 00:17:35.849 ] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "vmd", 00:17:35.849 "config": [] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "accel", 00:17:35.849 "config": [ 00:17:35.849 { 00:17:35.849 "method": "accel_set_options", 00:17:35.849 "params": { 00:17:35.849 "small_cache_size": 128, 00:17:35.849 "large_cache_size": 16, 00:17:35.849 "task_count": 2048, 00:17:35.849 "sequence_count": 2048, 00:17:35.849 "buf_count": 2048 00:17:35.849 } 00:17:35.849 } 00:17:35.849 ] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "bdev", 00:17:35.849 "config": [ 00:17:35.849 { 00:17:35.849 "method": "bdev_set_options", 00:17:35.849 "params": { 00:17:35.849 "bdev_io_pool_size": 65535, 00:17:35.849 "bdev_io_cache_size": 256, 00:17:35.849 "bdev_auto_examine": true, 00:17:35.849 "iobuf_small_cache_size": 128, 00:17:35.849 "iobuf_large_cache_size": 16 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "bdev_raid_set_options", 00:17:35.849 "params": { 00:17:35.849 "process_window_size_kb": 1024, 00:17:35.849 "process_max_bandwidth_mb_sec": 0 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "bdev_iscsi_set_options", 00:17:35.849 "params": { 00:17:35.849 "timeout_sec": 30 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "bdev_nvme_set_options", 00:17:35.849 "params": { 00:17:35.849 "action_on_timeout": "none", 00:17:35.849 "timeout_us": 0, 00:17:35.849 "timeout_admin_us": 0, 00:17:35.849 "keep_alive_timeout_ms": 10000, 00:17:35.849 "arbitration_burst": 0, 00:17:35.849 "low_priority_weight": 0, 00:17:35.849 "medium_priority_weight": 0, 00:17:35.849 "high_priority_weight": 0, 00:17:35.849 "nvme_adminq_poll_period_us": 10000, 00:17:35.849 "nvme_ioq_poll_period_us": 0, 00:17:35.849 "io_queue_requests": 0, 00:17:35.849 "delay_cmd_submit": true, 00:17:35.849 "transport_retry_count": 4, 00:17:35.849 "bdev_retry_count": 3, 00:17:35.849 "transport_ack_timeout": 0, 00:17:35.849 "ctrlr_loss_timeout_sec": 0, 00:17:35.849 "reconnect_delay_sec": 0, 00:17:35.849 "fast_io_fail_timeout_sec": 0, 00:17:35.849 "disable_auto_failback": false, 00:17:35.849 "generate_uuids": false, 00:17:35.849 "transport_tos": 0, 00:17:35.849 "nvme_error_stat": false, 00:17:35.849 "rdma_srq_size": 0, 00:17:35.849 "io_path_stat": false, 00:17:35.849 "allow_accel_sequence": false, 00:17:35.849 "rdma_max_cq_size": 0, 00:17:35.849 "rdma_cm_event_timeout_ms": 0, 00:17:35.849 "dhchap_digests": [ 00:17:35.849 "sha256", 00:17:35.849 "sha384", 00:17:35.849 "sha512" 00:17:35.849 ], 00:17:35.849 "dhchap_dhgroups": [ 00:17:35.849 "null", 00:17:35.849 "ffdhe2048", 00:17:35.849 "ffdhe3072", 00:17:35.849 "ffdhe4096", 00:17:35.849 "ffdhe6144", 00:17:35.849 "ffdhe8192" 00:17:35.849 ] 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "bdev_nvme_set_hotplug", 00:17:35.849 "params": { 00:17:35.849 "period_us": 100000, 00:17:35.849 "enable": false 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "bdev_malloc_create", 00:17:35.849 "params": { 00:17:35.849 "name": "malloc0", 00:17:35.849 "num_blocks": 8192, 00:17:35.849 "block_size": 4096, 00:17:35.849 "physical_block_size": 4096, 00:17:35.849 "uuid": "97886e3e-6bb9-4f8a-b331-00e5ff10e7f2", 00:17:35.849 "optimal_io_boundary": 0, 00:17:35.849 "md_size": 0, 00:17:35.849 "dif_type": 0, 00:17:35.849 
"dif_is_head_of_md": false, 00:17:35.849 "dif_pi_format": 0 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "bdev_wait_for_examine" 00:17:35.849 } 00:17:35.849 ] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "scsi", 00:17:35.849 "config": null 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "scheduler", 00:17:35.849 "config": [ 00:17:35.849 { 00:17:35.849 "method": "framework_set_scheduler", 00:17:35.849 "params": { 00:17:35.849 "name": "static" 00:17:35.849 } 00:17:35.849 } 00:17:35.849 ] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "vhost_scsi", 00:17:35.849 "config": [] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "vhost_blk", 00:17:35.849 "config": [] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "ublk", 00:17:35.849 "config": [ 00:17:35.849 { 00:17:35.849 "method": "ublk_create_target", 00:17:35.849 "params": { 00:17:35.849 "cpumask": "1" 00:17:35.849 } 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "method": "ublk_start_disk", 00:17:35.849 "params": { 00:17:35.849 "bdev_name": "malloc0", 00:17:35.849 "ublk_id": 0, 00:17:35.849 "num_queues": 1, 00:17:35.849 "queue_depth": 128 00:17:35.849 } 00:17:35.849 } 00:17:35.849 ] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "nbd", 00:17:35.849 "config": [] 00:17:35.849 }, 00:17:35.849 { 00:17:35.849 "subsystem": "nvmf", 00:17:35.849 "config": [ 00:17:35.849 { 00:17:35.849 "method": "nvmf_set_config", 00:17:35.849 "params": { 00:17:35.849 "discovery_filter": "match_any", 00:17:35.849 "admin_cmd_passthru": { 00:17:35.849 "identify_ctrlr": false 00:17:35.849 }, 00:17:35.849 "dhchap_digests": [ 00:17:35.849 "sha256", 00:17:35.849 "sha384", 00:17:35.850 "sha512" 00:17:35.850 ], 00:17:35.850 "dhchap_dhgroups": [ 00:17:35.850 "null", 00:17:35.850 "ffdhe2048", 00:17:35.850 "ffdhe3072", 00:17:35.850 "ffdhe4096", 00:17:35.850 "ffdhe6144", 00:17:35.850 "ffdhe8192" 00:17:35.850 ] 00:17:35.850 } 00:17:35.850 }, 00:17:35.850 { 00:17:35.850 "method": "nvmf_set_max_subsystems", 00:17:35.850 "params": { 00:17:35.850 "max_subsystems": 1024 00:17:35.850 } 00:17:35.850 }, 00:17:35.850 { 00:17:35.850 "method": "nvmf_set_crdt", 00:17:35.850 "params": { 00:17:35.850 "crdt1": 0, 00:17:35.850 "crdt2": 0, 00:17:35.850 "crdt3": 0 00:17:35.850 } 00:17:35.850 } 00:17:35.850 ] 00:17:35.850 }, 00:17:35.850 { 00:17:35.850 "subsystem": "iscsi", 00:17:35.850 "config": [ 00:17:35.850 { 00:17:35.850 "method": "iscsi_set_options", 00:17:35.850 "params": { 00:17:35.850 "node_base": "iqn.2016-06.io.spdk", 00:17:35.850 "max_sessions": 128, 00:17:35.850 "max_connections_per_session": 2, 00:17:35.850 "max_queue_depth": 64, 00:17:35.850 "default_time2wait": 2, 00:17:35.850 "default_time2retain": 20, 00:17:35.850 "first_burst_length": 8192, 00:17:35.850 "immediate_data": true, 00:17:35.850 "allow_duplicated_isid": false, 00:17:35.850 "error_recovery_level": 0, 00:17:35.850 "nop_timeout": 60, 00:17:35.850 "nop_in_interval": 30, 00:17:35.850 "disable_chap": false, 00:17:35.850 "require_chap": false, 00:17:35.850 "mutual_chap": false, 00:17:35.850 "chap_group": 0, 00:17:35.850 "max_large_datain_per_connection": 64, 00:17:35.850 "max_r2t_per_connection": 4, 00:17:35.850 "pdu_pool_size": 36864, 00:17:35.850 "immediate_data_pool_size": 16384, 00:17:35.850 "data_out_pool_size": 2048 00:17:35.850 } 00:17:35.850 } 00:17:35.850 ] 00:17:35.850 } 00:17:35.850 ] 00:17:35.850 }' 00:17:35.850 [2024-11-20 16:44:20.594643] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:17:35.850 [2024-11-20 16:44:20.594987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73555 ] 00:17:36.108 [2024-11-20 16:44:20.748670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.108 [2024-11-20 16:44:20.846177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.040 [2024-11-20 16:44:21.600397] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:37.040 [2024-11-20 16:44:21.601211] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:37.040 [2024-11-20 16:44:21.608502] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:37.041 [2024-11-20 16:44:21.608587] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:37.041 [2024-11-20 16:44:21.608596] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:37.041 [2024-11-20 16:44:21.608603] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:37.041 [2024-11-20 16:44:21.617461] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:37.041 [2024-11-20 16:44:21.617479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:37.041 [2024-11-20 16:44:21.624409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:37.041 [2024-11-20 16:44:21.624496] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:37.041 [2024-11-20 16:44:21.641402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73555 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73555 ']' 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73555 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73555 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.041 killing process with pid 73555 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73555' 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73555 00:17:37.041 16:44:21 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73555 00:17:38.413 [2024-11-20 16:44:22.896254] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:38.413 [2024-11-20 16:44:22.938453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:38.413 [2024-11-20 16:44:22.938602] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:38.413 [2024-11-20 16:44:22.945409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:38.413 [2024-11-20 16:44:22.945460] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:38.413 [2024-11-20 16:44:22.945469] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:38.413 [2024-11-20 16:44:22.945495] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:38.413 [2024-11-20 16:44:22.945635] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:39.785 16:44:24 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:39.785 00:17:39.785 real 0m7.902s 00:17:39.785 user 0m5.344s 00:17:39.785 sys 0m3.129s 00:17:39.785 16:44:24 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.785 16:44:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:39.785 ************************************ 00:17:39.785 END TEST test_save_ublk_config 00:17:39.785 ************************************ 00:17:39.785 16:44:24 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73634 00:17:39.785 16:44:24 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.785 16:44:24 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73634 00:17:39.785 16:44:24 ublk -- common/autotest_common.sh@835 -- # '[' -z 73634 ']' 00:17:39.785 16:44:24 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:39.785 16:44:24 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.785 16:44:24 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.785 16:44:24 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.785 16:44:24 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.785 16:44:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:39.785 [2024-11-20 16:44:24.580845] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:17:39.785 [2024-11-20 16:44:24.580970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73634 ] 00:17:40.043 [2024-11-20 16:44:24.738510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:40.043 [2024-11-20 16:44:24.823284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.043 [2024-11-20 16:44:24.823404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.609 16:44:25 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.609 16:44:25 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:40.609 16:44:25 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:40.609 16:44:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:40.609 16:44:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.609 16:44:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.609 ************************************ 00:17:40.609 START TEST test_create_ublk 00:17:40.609 ************************************ 00:17:40.609 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:40.609 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:40.609 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.609 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.609 [2024-11-20 16:44:25.435397] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:40.609 [2024-11-20 16:44:25.436955] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:40.609 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.609 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:40.609 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:40.609 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.609 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.867 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:40.867 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.867 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.867 [2024-11-20 16:44:25.587511] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:40.867 [2024-11-20 16:44:25.587818] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:40.867 [2024-11-20 16:44:25.587832] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:40.867 [2024-11-20 16:44:25.587839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:40.867 [2024-11-20 16:44:25.596563] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:40.867 [2024-11-20 16:44:25.596583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:40.867 
[2024-11-20 16:44:25.603405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:40.867 [2024-11-20 16:44:25.611441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:40.867 [2024-11-20 16:44:25.633411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:40.867 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:40.867 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.867 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:40.867 16:44:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:40.867 { 00:17:40.867 "ublk_device": "/dev/ublkb0", 00:17:40.867 "id": 0, 00:17:40.867 "queue_depth": 512, 00:17:40.867 "num_queues": 4, 00:17:40.867 "bdev_name": "Malloc0" 00:17:40.867 } 00:17:40.867 ]' 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:40.867 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:41.125 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:41.125 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:41.125 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:41.125 16:44:25 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
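run_fio_test above expands to a single direct-I/O fio job that writes a 0xcc pattern across the whole 128 MiB ublk device for ten seconds (time-based, so the verify read phase never starts, as fio notes below). A sketch of standing up the same device by hand and running that exact job, assuming rpc.py from the same checkout and an spdk_tgt already running with -L ublk as above:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  $RPC ublk_create_target                        # the test passes no cpumask here
  MALLOC=$($RPC bdev_malloc_create 128 4096)     # 128 MiB malloc bdev, 4096-byte blocks; prints its name
  $RPC ublk_start_disk "$MALLOC" 0 -q 4 -d 512   # exposes /dev/ublkb0

  # the job run_fio_test assembles above: write 0xcc over all 134217728 bytes for 10 s
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

  # tear-down mirrors the end of the test
  $RPC ublk_stop_disk 0
  $RPC ublk_destroy_target
  $RPC bdev_malloc_delete "$MALLOC"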
00:17:41.125 16:44:25 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:41.125 fio: verification read phase will never start because write phase uses all of runtime 00:17:41.125 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:41.125 fio-3.35 00:17:41.125 Starting 1 process 00:17:53.320 00:17:53.320 fio_test: (groupid=0, jobs=1): err= 0: pid=73674: Wed Nov 20 16:44:36 2024 00:17:53.320 write: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(746MiB/10001msec); 0 zone resets 00:17:53.320 clat (usec): min=34, max=4038, avg=51.51, stdev=82.29 00:17:53.320 lat (usec): min=34, max=4038, avg=52.01, stdev=82.33 00:17:53.320 clat percentiles (usec): 00:17:53.320 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 44], 00:17:53.320 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 49], 00:17:53.320 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 58], 95.00th=[ 62], 00:17:53.320 | 99.00th=[ 72], 99.50th=[ 79], 99.90th=[ 1319], 99.95th=[ 2409], 00:17:53.320 | 99.99th=[ 3425] 00:17:53.320 bw ( KiB/s): min=72008, max=79864, per=100.00%, avg=76384.00, stdev=2230.53, samples=19 00:17:53.320 iops : min=18002, max=19966, avg=19096.00, stdev=557.63, samples=19 00:17:53.320 lat (usec) : 50=71.50%, 100=28.20%, 250=0.13%, 500=0.03%, 750=0.01% 00:17:53.320 lat (usec) : 1000=0.01% 00:17:53.320 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:17:53.320 cpu : usr=3.79%, sys=16.76%, ctx=190882, majf=0, minf=795 00:17:53.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:53.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.320 issued rwts: total=0,190887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:53.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:53.320 00:17:53.320 Run status group 0 (all jobs): 00:17:53.320 WRITE: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=746MiB (782MB), run=10001-10001msec 00:17:53.320 00:17:53.320 Disk stats (read/write): 00:17:53.320 ublkb0: ios=0/188921, merge=0/0, ticks=0/7930, in_queue=7931, util=99.06% 00:17:53.320 16:44:36 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.320 [2024-11-20 16:44:36.042409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:53.320 [2024-11-20 16:44:36.074935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:53.320 [2024-11-20 16:44:36.075897] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:53.320 [2024-11-20 16:44:36.081409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:53.320 [2024-11-20 16:44:36.081676] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:53.320 [2024-11-20 16:44:36.081693] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.320 16:44:36 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.320 [2024-11-20 16:44:36.096476] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:53.320 request: 00:17:53.320 { 00:17:53.320 "ublk_id": 0, 00:17:53.320 "method": "ublk_stop_disk", 00:17:53.320 "req_id": 1 00:17:53.320 } 00:17:53.320 Got JSON-RPC error response 00:17:53.320 response: 00:17:53.320 { 00:17:53.320 "code": -19, 00:17:53.320 "message": "No such device" 00:17:53.320 } 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.320 16:44:36 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.320 [2024-11-20 16:44:36.105475] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:53.320 [2024-11-20 16:44:36.109319] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:53.320 [2024-11-20 16:44:36.109361] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.320 16:44:36 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.320 16:44:36 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:53.320 16:44:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.320 16:44:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:53.320 16:44:36 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:53.320 16:44:36 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:53.320 16:44:36 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.320 16:44:36 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:53.320 16:44:36 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:53.320 16:44:36 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:53.320 00:17:53.320 real 0m11.225s 00:17:53.320 user 0m0.681s 00:17:53.320 sys 0m1.756s 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.320 ************************************ 00:17:53.320 END TEST test_create_ublk 00:17:53.320 16:44:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.320 ************************************ 00:17:53.320 16:44:36 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:53.320 16:44:36 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.321 16:44:36 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.321 16:44:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 ************************************ 00:17:53.321 START TEST test_create_multi_ublk 00:17:53.321 ************************************ 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 [2024-11-20 16:44:36.704389] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:53.321 [2024-11-20 16:44:36.706050] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 [2024-11-20 16:44:36.920524] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:53.321 [2024-11-20 16:44:36.920834] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:53.321 [2024-11-20 16:44:36.920847] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:53.321 [2024-11-20 16:44:36.920856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:53.321 [2024-11-20 16:44:36.932455] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:53.321 [2024-11-20 16:44:36.932489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:53.321 [2024-11-20 16:44:36.944399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:53.321 [2024-11-20 16:44:36.944930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:53.321 [2024-11-20 16:44:36.958397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 [2024-11-20 16:44:37.167521] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:53.321 [2024-11-20 16:44:37.167835] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:53.321 [2024-11-20 16:44:37.167849] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:53.321 [2024-11-20 16:44:37.167855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:53.321 [2024-11-20 16:44:37.176576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:53.321 [2024-11-20 16:44:37.176596] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:53.321 [2024-11-20 16:44:37.183423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:53.321 [2024-11-20 16:44:37.183939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:53.321 [2024-11-20 16:44:37.192433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.321 
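Each iteration of the $(seq 0 $MAX_DEV_ID) loop above pairs a 128 MiB malloc bdev with a ublk device of the same index; the loop continues below for Malloc2 and Malloc3. Stripped of the test plumbing (rpc_cmd is the harness wrapper around scripts/rpc.py), one iteration amounts to:

    rpc.py ublk_create_target                        # done once, before the loop
    rpc.py bdev_malloc_create -b Malloc0 128 4096    # 128 MiB bdev, 4 KiB block size
    rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512     # exposes /dev/ublkb0 with 4 queues, depth 512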
16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 [2024-11-20 16:44:37.359489] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:53.321 [2024-11-20 16:44:37.359794] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:53.321 [2024-11-20 16:44:37.359806] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:53.321 [2024-11-20 16:44:37.359813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:53.321 [2024-11-20 16:44:37.367421] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:53.321 [2024-11-20 16:44:37.367442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:53.321 [2024-11-20 16:44:37.375408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:53.321 [2024-11-20 16:44:37.375923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:53.321 [2024-11-20 16:44:37.379187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 [2024-11-20 16:44:37.536511] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:53.321 [2024-11-20 16:44:37.536815] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:53.321 [2024-11-20 16:44:37.536829] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:53.321 [2024-11-20 16:44:37.536834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:53.321 
[2024-11-20 16:44:37.544419] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:53.321 [2024-11-20 16:44:37.544437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:53.321 [2024-11-20 16:44:37.552409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:53.321 [2024-11-20 16:44:37.552934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:53.321 [2024-11-20 16:44:37.557034] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:53.321 { 00:17:53.321 "ublk_device": "/dev/ublkb0", 00:17:53.321 "id": 0, 00:17:53.321 "queue_depth": 512, 00:17:53.321 "num_queues": 4, 00:17:53.321 "bdev_name": "Malloc0" 00:17:53.321 }, 00:17:53.321 { 00:17:53.321 "ublk_device": "/dev/ublkb1", 00:17:53.321 "id": 1, 00:17:53.321 "queue_depth": 512, 00:17:53.321 "num_queues": 4, 00:17:53.321 "bdev_name": "Malloc1" 00:17:53.321 }, 00:17:53.321 { 00:17:53.321 "ublk_device": "/dev/ublkb2", 00:17:53.321 "id": 2, 00:17:53.321 "queue_depth": 512, 00:17:53.321 "num_queues": 4, 00:17:53.321 "bdev_name": "Malloc2" 00:17:53.321 }, 00:17:53.321 { 00:17:53.321 "ublk_device": "/dev/ublkb3", 00:17:53.321 "id": 3, 00:17:53.321 "queue_depth": 512, 00:17:53.321 "num_queues": 4, 00:17:53.321 "bdev_name": "Malloc3" 00:17:53.321 } 00:17:53.321 ]' 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:53.321 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:53.322 16:44:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:53.322 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.580 [2024-11-20 16:44:38.244491] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:53.580 [2024-11-20 16:44:38.291438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:53.580 [2024-11-20 16:44:38.292140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:53.580 [2024-11-20 16:44:38.301429] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:53.580 [2024-11-20 16:44:38.301670] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:53.580 [2024-11-20 16:44:38.301684] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.580 [2024-11-20 16:44:38.316465] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:53.580 [2024-11-20 16:44:38.364430] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:53.580 [2024-11-20 16:44:38.365120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:53.580 [2024-11-20 16:44:38.376404] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:53.580 [2024-11-20 16:44:38.376637] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:53.580 [2024-11-20 16:44:38.376650] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.580 [2024-11-20 16:44:38.392479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:53.580 [2024-11-20 16:44:38.424849] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:53.580 [2024-11-20 16:44:38.425843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:53.580 [2024-11-20 16:44:38.431407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:53.580 [2024-11-20 16:44:38.431630] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:53.580 [2024-11-20 16:44:38.431643] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.580 16:44:38 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.580 [2024-11-20 16:44:38.447478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:53.838 [2024-11-20 16:44:38.479437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:53.838 [2024-11-20 16:44:38.480044] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:53.838 [2024-11-20 16:44:38.487405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:53.838 [2024-11-20 16:44:38.487635] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:53.838 [2024-11-20 16:44:38.487647] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:53.838 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.838 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:53.838 [2024-11-20 16:44:38.679459] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:53.838 [2024-11-20 16:44:38.683063] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:53.838 [2024-11-20 16:44:38.683095] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:53.838 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:53.838 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:53.839 16:44:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:53.839 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.839 16:44:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:54.414 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.414 16:44:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:54.414 16:44:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:54.414 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.414 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:54.678 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.678 16:44:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:54.678 16:44:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:54.678 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.678 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:54.937 16:44:39 
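The teardown above mirrors the setup in reverse: every ublk device is stopped (a STOP_DEV followed by a DEL_DEV control command), the ublk target is destroyed with a longer RPC timeout, and the backing malloc bdevs are deleted before the leftover-device check. In rpc.py terms this is roughly:

    for i in 0 1 2 3; do
        rpc.py ublk_stop_disk "$i"          # UBLK_CMD_STOP_DEV + UBLK_CMD_DEL_DEV
    done
    rpc.py -t 120 ublk_destroy_target       # allow up to 120 s for shutdown, as above
    for i in 0 1 2 3; do
        rpc.py bdev_malloc_delete "Malloc$i"
    done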
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.937 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:55.237 00:17:55.237 real 0m3.209s 00:17:55.237 user 0m0.819s 00:17:55.237 sys 0m0.159s 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.237 ************************************ 00:17:55.237 END TEST test_create_multi_ublk 00:17:55.237 ************************************ 00:17:55.237 16:44:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.237 16:44:39 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:55.237 16:44:39 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:55.237 16:44:39 ublk -- ublk/ublk.sh@130 -- # killprocess 73634 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@954 -- # '[' -z 73634 ']' 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@958 -- # kill -0 73634 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@959 -- # uname 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73634 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73634' 00:17:55.237 killing process with pid 73634 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@973 -- # kill 73634 00:17:55.237 16:44:39 ublk -- common/autotest_common.sh@978 -- # wait 73634 00:17:55.806 [2024-11-20 16:44:40.498277] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:55.806 [2024-11-20 16:44:40.498330] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:56.372 00:17:56.372 real 0m24.740s 00:17:56.372 user 0m35.346s 00:17:56.372 sys 0m9.774s 00:17:56.372 16:44:41 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.372 16:44:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:56.372 ************************************ 00:17:56.372 END TEST ublk 00:17:56.372 ************************************ 00:17:56.372 16:44:41 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:56.372 
16:44:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:56.372 16:44:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.372 16:44:41 -- common/autotest_common.sh@10 -- # set +x 00:17:56.372 ************************************ 00:17:56.372 START TEST ublk_recovery 00:17:56.372 ************************************ 00:17:56.372 16:44:41 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:56.372 * Looking for test storage... 00:17:56.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.630 16:44:41 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:56.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.630 --rc genhtml_branch_coverage=1 00:17:56.630 --rc genhtml_function_coverage=1 00:17:56.630 --rc genhtml_legend=1 00:17:56.630 --rc geninfo_all_blocks=1 00:17:56.630 --rc geninfo_unexecuted_blocks=1 00:17:56.630 00:17:56.630 ' 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:56.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.630 --rc genhtml_branch_coverage=1 00:17:56.630 --rc genhtml_function_coverage=1 00:17:56.630 --rc genhtml_legend=1 00:17:56.630 --rc geninfo_all_blocks=1 00:17:56.630 --rc geninfo_unexecuted_blocks=1 00:17:56.630 00:17:56.630 ' 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:56.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.630 --rc genhtml_branch_coverage=1 00:17:56.630 --rc genhtml_function_coverage=1 00:17:56.630 --rc genhtml_legend=1 00:17:56.630 --rc geninfo_all_blocks=1 00:17:56.630 --rc geninfo_unexecuted_blocks=1 00:17:56.630 00:17:56.630 ' 00:17:56.630 16:44:41 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:56.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.630 --rc genhtml_branch_coverage=1 00:17:56.630 --rc genhtml_function_coverage=1 00:17:56.630 --rc genhtml_legend=1 00:17:56.630 --rc geninfo_all_blocks=1 00:17:56.630 --rc geninfo_unexecuted_blocks=1 00:17:56.630 00:17:56.630 ' 00:17:56.631 16:44:41 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:56.631 16:44:41 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:56.631 16:44:41 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:56.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.631 16:44:41 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74024 00:17:56.631 16:44:41 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.631 16:44:41 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74024 00:17:56.631 16:44:41 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74024 ']' 00:17:56.631 16:44:41 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.631 16:44:41 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.631 16:44:41 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.631 16:44:41 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.631 16:44:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:56.631 16:44:41 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:56.631 [2024-11-20 16:44:41.394637] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:17:56.631 [2024-11-20 16:44:41.394737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74024 ] 00:17:56.888 [2024-11-20 16:44:41.544335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.888 [2024-11-20 16:44:41.629306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.888 [2024-11-20 16:44:41.629330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.454 16:44:42 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.454 16:44:42 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:57.454 16:44:42 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:57.454 16:44:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.454 16:44:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.454 [2024-11-20 16:44:42.240400] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:57.455 [2024-11-20 16:44:42.241982] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:57.455 16:44:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.455 16:44:42 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:57.455 16:44:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.455 16:44:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.455 malloc0 00:17:57.455 16:44:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.455 16:44:42 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:57.455 16:44:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.455 16:44:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.455 [2024-11-20 16:44:42.328529] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:57.455 [2024-11-20 16:44:42.328623] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:57.455 [2024-11-20 16:44:42.328632] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:57.455 [2024-11-20 16:44:42.328641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:57.455 [2024-11-20 16:44:42.337479] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:57.455 [2024-11-20 16:44:42.337504] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:57.714 [2024-11-20 16:44:42.344406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:57.714 [2024-11-20 16:44:42.344537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:57.714 [2024-11-20 16:44:42.361407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:57.714 1 00:17:57.714 16:44:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.714 16:44:42 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:58.649 16:44:43 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74059 00:17:58.649 16:44:43 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:58.649 16:44:43 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:58.649 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.649 fio-3.35 00:17:58.649 Starting 1 process 00:18:03.911 16:44:48 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74024 00:18:03.911 16:44:48 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:09.191 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74024 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:09.191 16:44:53 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74169 00:18:09.191 16:44:53 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.191 16:44:53 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74169 00:18:09.191 16:44:53 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74169 ']' 00:18:09.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.191 16:44:53 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.191 16:44:53 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.191 16:44:53 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.191 16:44:53 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.191 16:44:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.191 16:44:53 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:09.191 [2024-11-20 16:44:53.466296] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
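The recovery scenario running above and below reduces to: export a malloc bdev as /dev/ublkb1, keep it busy with fio, kill the SPDK target with SIGKILL mid-I/O, then start a fresh target and let it re-attach the still-open kernel ublk device. In outline, with the binaries and IDs used in this run:

    # first target instance
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1

    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &

    kill -9 "$spdk_pid"                               # simulate a crash while fio is running

    # second target instance picks the device back up
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1                # START_USER_RECOVERY ... END_USER_RECOVERY

The fio job then completes its full 60 seconds of random I/O against the recovered device, which the fio summary further below (run=60002msec) reflects.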
00:18:09.191 [2024-11-20 16:44:53.466448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74169 ] 00:18:09.191 [2024-11-20 16:44:53.627411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:09.191 [2024-11-20 16:44:53.732077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.191 [2024-11-20 16:44:53.732303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:09.756 16:44:54 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.756 [2024-11-20 16:44:54.346416] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:09.756 [2024-11-20 16:44:54.349453] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.756 16:44:54 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.756 malloc0 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.756 16:44:54 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.756 [2024-11-20 16:44:54.506532] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:09.756 [2024-11-20 16:44:54.506577] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:09.756 [2024-11-20 16:44:54.506587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:09.756 [2024-11-20 16:44:54.514433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:09.756 [2024-11-20 16:44:54.514459] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:18:09.756 [2024-11-20 16:44:54.514467] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:09.756 [2024-11-20 16:44:54.514541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:09.756 1 00:18:09.756 16:44:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.756 16:44:54 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74059 00:18:09.756 [2024-11-20 16:44:54.522406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:09.756 [2024-11-20 16:44:54.529027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:09.756 [2024-11-20 16:44:54.536595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:09.756 [2024-11-20 
16:44:54.536618] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:05.972 00:19:05.972 fio_test: (groupid=0, jobs=1): err= 0: pid=74062: Wed Nov 20 16:45:43 2024 00:19:05.972 read: IOPS=24.9k, BW=97.1MiB/s (102MB/s)(5825MiB/60002msec) 00:19:05.972 slat (nsec): min=991, max=339824, avg=5302.34, stdev=1925.23 00:19:05.972 clat (usec): min=773, max=6170.8k, avg=2510.44, stdev=38795.08 00:19:05.972 lat (usec): min=813, max=6170.8k, avg=2515.75, stdev=38795.08 00:19:05.972 clat percentiles (usec): 00:19:05.972 | 1.00th=[ 1729], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1942], 00:19:05.972 | 30.00th=[ 1975], 40.00th=[ 2040], 50.00th=[ 2114], 60.00th=[ 2212], 00:19:05.972 | 70.00th=[ 2343], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 3261], 00:19:05.972 | 99.00th=[ 5014], 99.50th=[ 5735], 99.90th=[ 6849], 99.95th=[ 7308], 00:19:05.972 | 99.99th=[12780] 00:19:05.972 bw ( KiB/s): min= 4184, max=127440, per=100.00%, avg=109463.35, stdev=16566.14, samples=108 00:19:05.972 iops : min= 1046, max=31860, avg=27365.82, stdev=4141.54, samples=108 00:19:05.972 write: IOPS=24.8k, BW=97.0MiB/s (102MB/s)(5818MiB/60002msec); 0 zone resets 00:19:05.972 slat (nsec): min=1034, max=1087.3k, avg=5421.78, stdev=2087.07 00:19:05.972 clat (usec): min=649, max=6170.9k, avg=2630.92, stdev=41977.93 00:19:05.972 lat (usec): min=662, max=6170.9k, avg=2636.34, stdev=41977.92 00:19:05.972 clat percentiles (usec): 00:19:05.972 | 1.00th=[ 1778], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 2024], 00:19:05.972 | 30.00th=[ 2073], 40.00th=[ 2147], 50.00th=[ 2212], 60.00th=[ 2278], 00:19:05.972 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2638], 95.00th=[ 3228], 00:19:05.972 | 99.00th=[ 5014], 99.50th=[ 5800], 99.90th=[ 6849], 99.95th=[ 7308], 00:19:05.972 | 99.99th=[13042] 00:19:05.972 bw ( KiB/s): min= 4024, max=125592, per=100.00%, avg=109337.10, stdev=16627.99, samples=108 00:19:05.972 iops : min= 1006, max=31398, avg=27334.27, stdev=4157.00, samples=108 00:19:05.972 lat (usec) : 750=0.01%, 1000=0.01% 00:19:05.972 lat (msec) : 2=24.14%, 4=73.04%, 10=2.80%, 20=0.01%, >=2000=0.01% 00:19:05.972 cpu : usr=6.23%, sys=27.26%, ctx=101299, majf=0, minf=13 00:19:05.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:05.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.972 issued rwts: total=1491286,1489506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.972 00:19:05.972 Run status group 0 (all jobs): 00:19:05.972 READ: bw=97.1MiB/s (102MB/s), 97.1MiB/s-97.1MiB/s (102MB/s-102MB/s), io=5825MiB (6108MB), run=60002-60002msec 00:19:05.972 WRITE: bw=97.0MiB/s (102MB/s), 97.0MiB/s-97.0MiB/s (102MB/s-102MB/s), io=5818MiB (6101MB), run=60002-60002msec 00:19:05.972 00:19:05.972 Disk stats (read/write): 00:19:05.972 ublkb1: ios=1488132/1486317, merge=0/0, ticks=3645265/3695459, in_queue=7340725, util=99.91% 00:19:05.972 16:45:43 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.972 [2024-11-20 16:45:43.629956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:05.972 [2024-11-20 16:45:43.662542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:19:05.972 [2024-11-20 16:45:43.662692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:05.972 [2024-11-20 16:45:43.670407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:05.972 [2024-11-20 16:45:43.670505] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:05.972 [2024-11-20 16:45:43.670514] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.972 16:45:43 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.972 [2024-11-20 16:45:43.684525] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:05.972 [2024-11-20 16:45:43.688299] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:05.972 [2024-11-20 16:45:43.688340] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.972 16:45:43 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:05.972 16:45:43 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:05.972 16:45:43 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74169 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74169 ']' 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74169 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74169 00:19:05.972 killing process with pid 74169 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74169' 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74169 00:19:05.972 16:45:43 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74169 00:19:05.973 [2024-11-20 16:45:44.773027] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:05.973 [2024-11-20 16:45:44.773071] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:05.973 00:19:05.973 real 1m4.295s 00:19:05.973 user 1m42.326s 00:19:05.973 sys 0m35.675s 00:19:05.973 16:45:45 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.973 16:45:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.973 ************************************ 00:19:05.973 END TEST ublk_recovery 00:19:05.973 ************************************ 00:19:05.973 16:45:45 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:05.973 16:45:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:05.973 16:45:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.973 16:45:45 -- common/autotest_common.sh@10 -- # set +x 00:19:05.973 16:45:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:05.973 16:45:45 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:05.973 16:45:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.973 16:45:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.973 16:45:45 -- common/autotest_common.sh@10 -- # set +x 00:19:05.973 ************************************ 00:19:05.973 START TEST ftl 00:19:05.973 ************************************ 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:05.973 * Looking for test storage... 00:19:05.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.973 16:45:45 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.973 16:45:45 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.973 16:45:45 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.973 16:45:45 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.973 16:45:45 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.973 16:45:45 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.973 16:45:45 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.973 16:45:45 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:05.973 16:45:45 ftl -- scripts/common.sh@345 -- # : 1 00:19:05.973 16:45:45 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.973 16:45:45 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.973 16:45:45 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:05.973 16:45:45 ftl -- scripts/common.sh@353 -- # local d=1 00:19:05.973 16:45:45 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.973 16:45:45 ftl -- scripts/common.sh@355 -- # echo 1 00:19:05.973 16:45:45 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.973 16:45:45 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@353 -- # local d=2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.973 16:45:45 ftl -- scripts/common.sh@355 -- # echo 2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.973 16:45:45 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.973 16:45:45 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.973 16:45:45 ftl -- scripts/common.sh@368 -- # return 0 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.973 --rc genhtml_branch_coverage=1 00:19:05.973 --rc genhtml_function_coverage=1 00:19:05.973 --rc genhtml_legend=1 00:19:05.973 --rc geninfo_all_blocks=1 00:19:05.973 --rc geninfo_unexecuted_blocks=1 00:19:05.973 00:19:05.973 ' 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.973 --rc genhtml_branch_coverage=1 00:19:05.973 --rc genhtml_function_coverage=1 00:19:05.973 --rc genhtml_legend=1 00:19:05.973 --rc geninfo_all_blocks=1 00:19:05.973 --rc geninfo_unexecuted_blocks=1 00:19:05.973 00:19:05.973 ' 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.973 --rc genhtml_branch_coverage=1 00:19:05.973 --rc genhtml_function_coverage=1 00:19:05.973 --rc genhtml_legend=1 00:19:05.973 --rc geninfo_all_blocks=1 00:19:05.973 --rc geninfo_unexecuted_blocks=1 00:19:05.973 00:19:05.973 ' 00:19:05.973 16:45:45 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.973 --rc genhtml_branch_coverage=1 00:19:05.973 --rc genhtml_function_coverage=1 00:19:05.973 --rc genhtml_legend=1 00:19:05.973 --rc geninfo_all_blocks=1 00:19:05.973 --rc geninfo_unexecuted_blocks=1 00:19:05.973 00:19:05.973 ' 00:19:05.973 16:45:45 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:05.973 16:45:45 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:05.973 16:45:45 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.973 16:45:45 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.973 16:45:45 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
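ftl/common.sh above only sets up paths, core masks and config locations for the FTL suite; ftl.sh then, below, runs scripts/setup.sh to bind the NVMe controllers to a userspace driver and launches spdk_tgt with --wait-for-rpc so the tests can configure it over RPC before subsystem initialization. A minimal sketch of that launch-and-wait step, using rpc_get_methods purely as a liveness probe where the harness relies on its waitforlisten helper:

    "$spdk_tgt_bin" --wait-for-rpc &
    spdk_tgt_pid=$!
    # poll the RPC socket until the target answers
    until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # resume framework initialization once any pre-init RPC configuration is done
    "$rpc_py" framework_start_init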
00:19:05.973 16:45:45 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:05.973 16:45:45 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.973 16:45:45 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:05.973 16:45:45 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:05.973 16:45:45 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.973 16:45:45 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.973 16:45:45 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:05.973 16:45:45 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:05.973 16:45:45 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:05.973 16:45:45 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:05.973 16:45:45 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:05.973 16:45:45 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:05.973 16:45:45 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.973 16:45:45 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.973 16:45:45 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:05.973 16:45:45 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:05.973 16:45:45 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:05.973 16:45:45 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:05.973 16:45:45 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:05.973 16:45:45 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:05.973 16:45:45 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:05.973 16:45:45 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:05.973 16:45:45 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.973 16:45:45 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.973 16:45:45 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.973 16:45:45 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:05.973 16:45:45 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:05.973 16:45:45 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:05.973 16:45:45 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:05.973 16:45:45 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:05.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:05.973 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.973 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.973 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.973 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:05.973 16:45:46 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:05.973 16:45:46 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74969 00:19:05.973 16:45:46 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74969 00:19:05.973 16:45:46 ftl -- common/autotest_common.sh@835 -- # '[' -z 74969 ']' 00:19:05.973 16:45:46 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.973 16:45:46 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.973 16:45:46 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.973 16:45:46 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.973 16:45:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:05.973 [2024-11-20 16:45:46.297306] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:19:05.974 [2024-11-20 16:45:46.297445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74969 ] 00:19:05.974 [2024-11-20 16:45:46.456244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.974 [2024-11-20 16:45:46.557438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.974 16:45:47 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.974 16:45:47 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:05.974 16:45:47 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:05.974 16:45:47 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@50 -- # break 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@63 -- # break 00:19:05.974 16:45:48 ftl -- ftl/ftl.sh@66 -- # killprocess 74969 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@954 -- # '[' -z 74969 ']' 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@958 -- # kill -0 74969 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@959 -- # uname 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.974 16:45:48 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74969 00:19:05.974 killing process with pid 74969 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74969' 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@973 -- # kill 74969 00:19:05.974 16:45:48 ftl -- common/autotest_common.sh@978 -- # wait 74969 00:19:05.974 16:45:50 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:05.974 16:45:50 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:05.974 16:45:50 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:05.974 16:45:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.974 16:45:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:05.974 ************************************ 00:19:05.974 START TEST ftl_fio_basic 00:19:05.974 ************************************ 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:05.974 * Looking for test storage... 00:19:05.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:05.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.974 --rc genhtml_branch_coverage=1 00:19:05.974 --rc genhtml_function_coverage=1 00:19:05.974 --rc genhtml_legend=1 00:19:05.974 --rc geninfo_all_blocks=1 00:19:05.974 --rc geninfo_unexecuted_blocks=1 00:19:05.974 00:19:05.974 ' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:05.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.974 --rc genhtml_branch_coverage=1 00:19:05.974 --rc genhtml_function_coverage=1 00:19:05.974 --rc genhtml_legend=1 00:19:05.974 --rc geninfo_all_blocks=1 00:19:05.974 --rc geninfo_unexecuted_blocks=1 00:19:05.974 00:19:05.974 ' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:05.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.974 --rc genhtml_branch_coverage=1 00:19:05.974 --rc genhtml_function_coverage=1 00:19:05.974 --rc genhtml_legend=1 00:19:05.974 --rc geninfo_all_blocks=1 00:19:05.974 --rc geninfo_unexecuted_blocks=1 00:19:05.974 00:19:05.974 ' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:05.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.974 --rc genhtml_branch_coverage=1 00:19:05.974 --rc genhtml_function_coverage=1 00:19:05.974 --rc genhtml_legend=1 00:19:05.974 --rc geninfo_all_blocks=1 00:19:05.974 --rc geninfo_unexecuted_blocks=1 00:19:05.974 00:19:05.974 ' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
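For reference, the device-probe phase of test/ftl/ftl.sh traced earlier (a throwaway spdk_tgt started with --wait-for-rpc, NVMe bdevs loaded, cache and base disks picked with jq, then the probe target killed) condenses to roughly the following. This is a paraphrase built only from the commands visible in the trace, not the verbatim script; process handling is simplified, since the real script uses its waitforlisten/killprocess helpers:

  # Paraphrase of the ftl.sh probe phase (commands taken from the trace above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  sleep 1                                          # stand-in for waitforlisten

  $rpc bdev_set_options -d                         # disable bdev auto-examine
  $rpc framework_start_init                        # finish the deferred init
  # the /dev/fd/62 seen in the trace is this process substitution:
  $rpc load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)

  # NV cache candidates: 64-byte metadata, not zoned, at least 1310720 blocks.
  cache_disks=$($rpc bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
  # Base candidates: any other large enough, non-zoned namespace
  # ("0000:00:10.0" is the cache address chosen in this particular run).
  base_disks=$($rpc bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

  kill "$tgt_pid"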
00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:05.974 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75101 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75101 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75101 ']' 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:05.975 16:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:05.975 [2024-11-20 16:45:50.712349] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
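The fio.sh preamble traced just above reduces to choosing a job list for the requested suite and exporting the variables the FTL fio jobs read. A paraphrased sketch, with values taken from this run and illustrative variable names (the actual names in fio.sh may differ):

  device=0000:00:11.0          # base disk   (1st argument to fio.sh)
  cache_device=0000:00:10.0    # NV cache    (2nd argument to fio.sh)

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

  tests=${suite[basic]}        # "basic" was the 3rd argument in this run
  timeout=240

  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json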
00:19:05.975 [2024-11-20 16:45:50.712581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75101 ] 00:19:06.233 [2024-11-20 16:45:50.873452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.233 [2024-11-20 16:45:50.975635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.233 [2024-11-20 16:45:50.975727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.233 [2024-11-20 16:45:50.975971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:06.797 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:07.055 16:45:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:07.313 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:07.313 { 00:19:07.313 "name": "nvme0n1", 00:19:07.313 "aliases": [ 00:19:07.313 "fb865f60-b3aa-4c10-ac61-04c62def377d" 00:19:07.314 ], 00:19:07.314 "product_name": "NVMe disk", 00:19:07.314 "block_size": 4096, 00:19:07.314 "num_blocks": 1310720, 00:19:07.314 "uuid": "fb865f60-b3aa-4c10-ac61-04c62def377d", 00:19:07.314 "numa_id": -1, 00:19:07.314 "assigned_rate_limits": { 00:19:07.314 "rw_ios_per_sec": 0, 00:19:07.314 "rw_mbytes_per_sec": 0, 00:19:07.314 "r_mbytes_per_sec": 0, 00:19:07.314 "w_mbytes_per_sec": 0 00:19:07.314 }, 00:19:07.314 "claimed": false, 00:19:07.314 "zoned": false, 00:19:07.314 "supported_io_types": { 00:19:07.314 "read": true, 00:19:07.314 "write": true, 00:19:07.314 "unmap": true, 00:19:07.314 "flush": true, 00:19:07.314 "reset": true, 00:19:07.314 "nvme_admin": true, 00:19:07.314 "nvme_io": true, 00:19:07.314 "nvme_io_md": false, 00:19:07.314 "write_zeroes": true, 00:19:07.314 "zcopy": false, 00:19:07.314 "get_zone_info": false, 00:19:07.314 "zone_management": false, 00:19:07.314 "zone_append": false, 00:19:07.314 "compare": true, 00:19:07.314 "compare_and_write": false, 00:19:07.314 "abort": true, 00:19:07.314 
"seek_hole": false, 00:19:07.314 "seek_data": false, 00:19:07.314 "copy": true, 00:19:07.314 "nvme_iov_md": false 00:19:07.314 }, 00:19:07.314 "driver_specific": { 00:19:07.314 "nvme": [ 00:19:07.314 { 00:19:07.314 "pci_address": "0000:00:11.0", 00:19:07.314 "trid": { 00:19:07.314 "trtype": "PCIe", 00:19:07.314 "traddr": "0000:00:11.0" 00:19:07.314 }, 00:19:07.314 "ctrlr_data": { 00:19:07.314 "cntlid": 0, 00:19:07.314 "vendor_id": "0x1b36", 00:19:07.314 "model_number": "QEMU NVMe Ctrl", 00:19:07.314 "serial_number": "12341", 00:19:07.314 "firmware_revision": "8.0.0", 00:19:07.314 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:07.314 "oacs": { 00:19:07.314 "security": 0, 00:19:07.314 "format": 1, 00:19:07.314 "firmware": 0, 00:19:07.314 "ns_manage": 1 00:19:07.314 }, 00:19:07.314 "multi_ctrlr": false, 00:19:07.314 "ana_reporting": false 00:19:07.314 }, 00:19:07.314 "vs": { 00:19:07.314 "nvme_version": "1.4" 00:19:07.314 }, 00:19:07.314 "ns_data": { 00:19:07.314 "id": 1, 00:19:07.314 "can_share": false 00:19:07.314 } 00:19:07.314 } 00:19:07.314 ], 00:19:07.314 "mp_policy": "active_passive" 00:19:07.314 } 00:19:07.314 } 00:19:07.314 ]' 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:07.314 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:07.572 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:07.572 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=6e26e57b-f396-40c5-b509-2f169b4689b2 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6e26e57b-f396-40c5-b509-2f169b4689b2 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=99aada00-ac94-477d-9794-1c16cdd143b6 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 99aada00-ac94-477d-9794-1c16cdd143b6 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=99aada00-ac94-477d-9794-1c16cdd143b6 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 99aada00-ac94-477d-9794-1c16cdd143b6 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=99aada00-ac94-477d-9794-1c16cdd143b6 
00:19:07.830 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:07.830 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99aada00-ac94-477d-9794-1c16cdd143b6 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.088 { 00:19:08.088 "name": "99aada00-ac94-477d-9794-1c16cdd143b6", 00:19:08.088 "aliases": [ 00:19:08.088 "lvs/nvme0n1p0" 00:19:08.088 ], 00:19:08.088 "product_name": "Logical Volume", 00:19:08.088 "block_size": 4096, 00:19:08.088 "num_blocks": 26476544, 00:19:08.088 "uuid": "99aada00-ac94-477d-9794-1c16cdd143b6", 00:19:08.088 "assigned_rate_limits": { 00:19:08.088 "rw_ios_per_sec": 0, 00:19:08.088 "rw_mbytes_per_sec": 0, 00:19:08.088 "r_mbytes_per_sec": 0, 00:19:08.088 "w_mbytes_per_sec": 0 00:19:08.088 }, 00:19:08.088 "claimed": false, 00:19:08.088 "zoned": false, 00:19:08.088 "supported_io_types": { 00:19:08.088 "read": true, 00:19:08.088 "write": true, 00:19:08.088 "unmap": true, 00:19:08.088 "flush": false, 00:19:08.088 "reset": true, 00:19:08.088 "nvme_admin": false, 00:19:08.088 "nvme_io": false, 00:19:08.088 "nvme_io_md": false, 00:19:08.088 "write_zeroes": true, 00:19:08.088 "zcopy": false, 00:19:08.088 "get_zone_info": false, 00:19:08.088 "zone_management": false, 00:19:08.088 "zone_append": false, 00:19:08.088 "compare": false, 00:19:08.088 "compare_and_write": false, 00:19:08.088 "abort": false, 00:19:08.088 "seek_hole": true, 00:19:08.088 "seek_data": true, 00:19:08.088 "copy": false, 00:19:08.088 "nvme_iov_md": false 00:19:08.088 }, 00:19:08.088 "driver_specific": { 00:19:08.088 "lvol": { 00:19:08.088 "lvol_store_uuid": "6e26e57b-f396-40c5-b509-2f169b4689b2", 00:19:08.088 "base_bdev": "nvme0n1", 00:19:08.088 "thin_provision": true, 00:19:08.088 "num_allocated_clusters": 0, 00:19:08.088 "snapshot": false, 00:19:08.088 "clone": false, 00:19:08.088 "esnap_clone": false 00:19:08.088 } 00:19:08.088 } 00:19:08.088 } 00:19:08.088 ]' 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:08.088 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:08.089 16:45:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:08.347 16:45:53 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:08.347 16:45:53 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:08.347 16:45:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 99aada00-ac94-477d-9794-1c16cdd143b6 00:19:08.347 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=99aada00-ac94-477d-9794-1c16cdd143b6 00:19:08.347 16:45:53 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:08.347 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:08.347 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:08.347 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99aada00-ac94-477d-9794-1c16cdd143b6 00:19:08.605 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.605 { 00:19:08.605 "name": "99aada00-ac94-477d-9794-1c16cdd143b6", 00:19:08.605 "aliases": [ 00:19:08.605 "lvs/nvme0n1p0" 00:19:08.605 ], 00:19:08.605 "product_name": "Logical Volume", 00:19:08.605 "block_size": 4096, 00:19:08.605 "num_blocks": 26476544, 00:19:08.605 "uuid": "99aada00-ac94-477d-9794-1c16cdd143b6", 00:19:08.605 "assigned_rate_limits": { 00:19:08.605 "rw_ios_per_sec": 0, 00:19:08.605 "rw_mbytes_per_sec": 0, 00:19:08.605 "r_mbytes_per_sec": 0, 00:19:08.605 "w_mbytes_per_sec": 0 00:19:08.605 }, 00:19:08.605 "claimed": false, 00:19:08.605 "zoned": false, 00:19:08.605 "supported_io_types": { 00:19:08.605 "read": true, 00:19:08.605 "write": true, 00:19:08.605 "unmap": true, 00:19:08.605 "flush": false, 00:19:08.605 "reset": true, 00:19:08.605 "nvme_admin": false, 00:19:08.605 "nvme_io": false, 00:19:08.605 "nvme_io_md": false, 00:19:08.605 "write_zeroes": true, 00:19:08.605 "zcopy": false, 00:19:08.605 "get_zone_info": false, 00:19:08.606 "zone_management": false, 00:19:08.606 "zone_append": false, 00:19:08.606 "compare": false, 00:19:08.606 "compare_and_write": false, 00:19:08.606 "abort": false, 00:19:08.606 "seek_hole": true, 00:19:08.606 "seek_data": true, 00:19:08.606 "copy": false, 00:19:08.606 "nvme_iov_md": false 00:19:08.606 }, 00:19:08.606 "driver_specific": { 00:19:08.606 "lvol": { 00:19:08.606 "lvol_store_uuid": "6e26e57b-f396-40c5-b509-2f169b4689b2", 00:19:08.606 "base_bdev": "nvme0n1", 00:19:08.606 "thin_provision": true, 00:19:08.606 "num_allocated_clusters": 0, 00:19:08.606 "snapshot": false, 00:19:08.606 "clone": false, 00:19:08.606 "esnap_clone": false 00:19:08.606 } 00:19:08.606 } 00:19:08.606 } 00:19:08.606 ]' 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:08.606 16:45:53 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:08.864 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 99aada00-ac94-477d-9794-1c16cdd143b6 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=99aada00-ac94-477d-9794-1c16cdd143b6 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:08.864 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99aada00-ac94-477d-9794-1c16cdd143b6 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.122 { 00:19:09.122 "name": "99aada00-ac94-477d-9794-1c16cdd143b6", 00:19:09.122 "aliases": [ 00:19:09.122 "lvs/nvme0n1p0" 00:19:09.122 ], 00:19:09.122 "product_name": "Logical Volume", 00:19:09.122 "block_size": 4096, 00:19:09.122 "num_blocks": 26476544, 00:19:09.122 "uuid": "99aada00-ac94-477d-9794-1c16cdd143b6", 00:19:09.122 "assigned_rate_limits": { 00:19:09.122 "rw_ios_per_sec": 0, 00:19:09.122 "rw_mbytes_per_sec": 0, 00:19:09.122 "r_mbytes_per_sec": 0, 00:19:09.122 "w_mbytes_per_sec": 0 00:19:09.122 }, 00:19:09.122 "claimed": false, 00:19:09.122 "zoned": false, 00:19:09.122 "supported_io_types": { 00:19:09.122 "read": true, 00:19:09.122 "write": true, 00:19:09.122 "unmap": true, 00:19:09.122 "flush": false, 00:19:09.122 "reset": true, 00:19:09.122 "nvme_admin": false, 00:19:09.122 "nvme_io": false, 00:19:09.122 "nvme_io_md": false, 00:19:09.122 "write_zeroes": true, 00:19:09.122 "zcopy": false, 00:19:09.122 "get_zone_info": false, 00:19:09.122 "zone_management": false, 00:19:09.122 "zone_append": false, 00:19:09.122 "compare": false, 00:19:09.122 "compare_and_write": false, 00:19:09.122 "abort": false, 00:19:09.122 "seek_hole": true, 00:19:09.122 "seek_data": true, 00:19:09.122 "copy": false, 00:19:09.122 "nvme_iov_md": false 00:19:09.122 }, 00:19:09.122 "driver_specific": { 00:19:09.122 "lvol": { 00:19:09.122 "lvol_store_uuid": "6e26e57b-f396-40c5-b509-2f169b4689b2", 00:19:09.122 "base_bdev": "nvme0n1", 00:19:09.122 "thin_provision": true, 00:19:09.122 "num_allocated_clusters": 0, 00:19:09.122 "snapshot": false, 00:19:09.122 "clone": false, 00:19:09.122 "esnap_clone": false 00:19:09.122 } 00:19:09.122 } 00:19:09.122 } 00:19:09.122 ]' 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:09.122 16:45:53 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 99aada00-ac94-477d-9794-1c16cdd143b6 -c nvc0n1p0 --l2p_dram_limit 60 00:19:09.381 [2024-11-20 16:45:54.108398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.381 [2024-11-20 16:45:54.108437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:09.381 [2024-11-20 16:45:54.108449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:09.382 
[2024-11-20 16:45:54.108456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.108508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.108518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:09.382 [2024-11-20 16:45:54.108526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:09.382 [2024-11-20 16:45:54.108532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.108563] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:09.382 [2024-11-20 16:45:54.109141] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:09.382 [2024-11-20 16:45:54.109165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.109172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:09.382 [2024-11-20 16:45:54.109181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:19:09.382 [2024-11-20 16:45:54.109187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.109288] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e1c84fdb-dcb4-448b-8dca-f6f047ea4df3 00:19:09.382 [2024-11-20 16:45:54.110300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.110431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:09.382 [2024-11-20 16:45:54.110445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:09.382 [2024-11-20 16:45:54.110453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.115277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.115368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:09.382 [2024-11-20 16:45:54.115428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.765 ms 00:19:09.382 [2024-11-20 16:45:54.115472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.115568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.115594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:09.382 [2024-11-20 16:45:54.115667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:09.382 [2024-11-20 16:45:54.115691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.115791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.115819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:09.382 [2024-11-20 16:45:54.115885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:09.382 [2024-11-20 16:45:54.115906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.115939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:09.382 [2024-11-20 16:45:54.118917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 
16:45:54.119016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:09.382 [2024-11-20 16:45:54.119076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:19:09.382 [2024-11-20 16:45:54.119097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.119137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.119154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:09.382 [2024-11-20 16:45:54.119200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:09.382 [2024-11-20 16:45:54.119218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.119249] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:09.382 [2024-11-20 16:45:54.119390] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:09.382 [2024-11-20 16:45:54.119429] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:09.382 [2024-11-20 16:45:54.119481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:09.382 [2024-11-20 16:45:54.119512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:09.382 [2024-11-20 16:45:54.119536] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:09.382 [2024-11-20 16:45:54.119584] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:09.382 [2024-11-20 16:45:54.119601] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:09.382 [2024-11-20 16:45:54.119618] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:09.382 [2024-11-20 16:45:54.119633] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:09.382 [2024-11-20 16:45:54.119672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.119691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:09.382 [2024-11-20 16:45:54.119711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:09.382 [2024-11-20 16:45:54.119777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.119870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.382 [2024-11-20 16:45:54.119887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:09.382 [2024-11-20 16:45:54.119905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:09.382 [2024-11-20 16:45:54.119947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.382 [2024-11-20 16:45:54.120074] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:09.382 [2024-11-20 16:45:54.120176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:09.382 [2024-11-20 16:45:54.120199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:09.382 [2024-11-20 16:45:54.120216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120233] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:09.382 [2024-11-20 16:45:54.120248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:09.382 [2024-11-20 16:45:54.120279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:09.382 [2024-11-20 16:45:54.120325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:09.382 [2024-11-20 16:45:54.120360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:09.382 [2024-11-20 16:45:54.120376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:09.382 [2024-11-20 16:45:54.120401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:09.382 [2024-11-20 16:45:54.120416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:09.382 [2024-11-20 16:45:54.120433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:09.382 [2024-11-20 16:45:54.120471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:09.382 [2024-11-20 16:45:54.120535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:09.382 [2024-11-20 16:45:54.120580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:09.382 [2024-11-20 16:45:54.120615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.382 [2024-11-20 16:45:54.120646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:09.382 [2024-11-20 16:45:54.120661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.382 [2024-11-20 16:45:54.120725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:09.382 [2024-11-20 16:45:54.120741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.382 [2024-11-20 16:45:54.120772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:09.382 [2024-11-20 16:45:54.120786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.382 [2024-11-20 16:45:54.120846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:09.382 [2024-11-20 16:45:54.120867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:09.382 [2024-11-20 16:45:54.120882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:09.382 [2024-11-20 16:45:54.120898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:09.382 [2024-11-20 16:45:54.120925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:09.382 [2024-11-20 16:45:54.120941] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:09.382 [2024-11-20 16:45:54.120956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:09.382 [2024-11-20 16:45:54.121001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:09.382 [2024-11-20 16:45:54.121018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.382 [2024-11-20 16:45:54.121035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:09.382 [2024-11-20 16:45:54.121050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:09.382 [2024-11-20 16:45:54.121067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.382 [2024-11-20 16:45:54.121082] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:09.382 [2024-11-20 16:45:54.121099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:09.382 [2024-11-20 16:45:54.121139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:09.382 [2024-11-20 16:45:54.121158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.382 [2024-11-20 16:45:54.121175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:09.383 [2024-11-20 16:45:54.121192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:09.383 [2024-11-20 16:45:54.121207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:09.383 [2024-11-20 16:45:54.121223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:09.383 [2024-11-20 16:45:54.121238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:09.383 [2024-11-20 16:45:54.121254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:09.383 [2024-11-20 16:45:54.121302] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:09.383 [2024-11-20 16:45:54.121332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:09.383 [2024-11-20 16:45:54.121357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:09.383 [2024-11-20 16:45:54.121390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:09.383 [2024-11-20 16:45:54.121416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:09.383 [2024-11-20 16:45:54.121441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:09.383 [2024-11-20 16:45:54.121495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:09.383 [2024-11-20 16:45:54.121521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:09.383 [2024-11-20 16:45:54.121544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:09.383 [2024-11-20 16:45:54.121568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:09.383 [2024-11-20 16:45:54.121594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:09.383 [2024-11-20 16:45:54.121620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:09.383 [2024-11-20 16:45:54.121676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:09.383 [2024-11-20 16:45:54.121703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:09.383 [2024-11-20 16:45:54.121727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:09.383 [2024-11-20 16:45:54.121752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:09.383 [2024-11-20 16:45:54.121775] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:09.383 [2024-11-20 16:45:54.121831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:09.383 [2024-11-20 16:45:54.121858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:09.383 [2024-11-20 16:45:54.121883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:09.383 [2024-11-20 16:45:54.121906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:09.383 [2024-11-20 16:45:54.121930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:09.383 [2024-11-20 16:45:54.121981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.383 [2024-11-20 16:45:54.122002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:09.383 [2024-11-20 16:45:54.122019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:19:09.383 [2024-11-20 16:45:54.122035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.383 [2024-11-20 16:45:54.122103] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
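The device whose startup is being logged here was assembled from the cache split and FTL create RPCs traced a little earlier; a short recap, plus a sanity check on the L2P numbers in the layout dump (plain arithmetic, not part of the test):

  # Recap of how ftl0 was put together (both RPCs appear verbatim in the trace):
  #   rpc.py bdev_split_create nvc0n1 -s 5171 1        # -> nvc0n1p0 (NV cache partition)
  #   rpc.py -t 240 bdev_ftl_create -b ftl0 -d 99aada00-ac94-477d-9794-1c16cdd143b6 \
  #          -c nvc0n1p0 --l2p_dram_limit 60
  # Sanity check on the layout dump: 20971520 L2P entries x 4 bytes each:
  echo $(( 20971520 * 4 / 1024 / 1024 ))               # 80 -> the "Region l2p ... 80.00 MiB" line
  # --l2p_dram_limit 60 only caps how much of that table may stay resident in DRAM,
  # which is why the log later reports "l2p maximum resident size is: 59 (of 60) MiB".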
00:19:09.383 [2024-11-20 16:45:54.122165] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:13.560 [2024-11-20 16:45:57.959011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.560 [2024-11-20 16:45:57.959228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:13.560 [2024-11-20 16:45:57.959322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3836.894 ms 00:19:13.560 [2024-11-20 16:45:57.959351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.560 [2024-11-20 16:45:57.984483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.560 [2024-11-20 16:45:57.984636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.560 [2024-11-20 16:45:57.984712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.903 ms 00:19:13.560 [2024-11-20 16:45:57.984738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.560 [2024-11-20 16:45:57.984887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.560 [2024-11-20 16:45:57.984916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:13.560 [2024-11-20 16:45:57.984972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:13.561 [2024-11-20 16:45:57.984999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.027396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.027554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.561 [2024-11-20 16:45:58.027624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.340 ms 00:19:13.561 [2024-11-20 16:45:58.027651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.027739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.027767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.561 [2024-11-20 16:45:58.027788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:13.561 [2024-11-20 16:45:58.027841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.028292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.028425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.561 [2024-11-20 16:45:58.028496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:13.561 [2024-11-20 16:45:58.028539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.028870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.028968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.561 [2024-11-20 16:45:58.029044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:13.561 [2024-11-20 16:45:58.029114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.046178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.046288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.561 [2024-11-20 
16:45:58.046340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.007 ms 00:19:13.561 [2024-11-20 16:45:58.046364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.058157] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:13.561 [2024-11-20 16:45:58.072304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.072357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.561 [2024-11-20 16:45:58.072372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.498 ms 00:19:13.561 [2024-11-20 16:45:58.072397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.126916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.126970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:13.561 [2024-11-20 16:45:58.126987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.476 ms 00:19:13.561 [2024-11-20 16:45:58.126996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.127216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.127233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.561 [2024-11-20 16:45:58.127246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:19:13.561 [2024-11-20 16:45:58.127254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.151045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.151093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:13.561 [2024-11-20 16:45:58.151108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.734 ms 00:19:13.561 [2024-11-20 16:45:58.151127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.173685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.173823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:13.561 [2024-11-20 16:45:58.173845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.506 ms 00:19:13.561 [2024-11-20 16:45:58.173852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.174441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.174458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.561 [2024-11-20 16:45:58.174468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:13.561 [2024-11-20 16:45:58.174475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.252254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.252309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:13.561 [2024-11-20 16:45:58.252330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.727 ms 00:19:13.561 [2024-11-20 16:45:58.252338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 
16:45:58.277279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.277460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:13.561 [2024-11-20 16:45:58.277482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.821 ms 00:19:13.561 [2024-11-20 16:45:58.277491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.301209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.301349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:13.561 [2024-11-20 16:45:58.301369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.672 ms 00:19:13.561 [2024-11-20 16:45:58.301391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.324609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.324652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:13.561 [2024-11-20 16:45:58.324666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.178 ms 00:19:13.561 [2024-11-20 16:45:58.324674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.324726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.324735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.561 [2024-11-20 16:45:58.324750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:13.561 [2024-11-20 16:45:58.324758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.324843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.561 [2024-11-20 16:45:58.324853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:13.561 [2024-11-20 16:45:58.324863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:13.561 [2024-11-20 16:45:58.324870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.561 [2024-11-20 16:45:58.325815] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4216.992 ms, result 0 00:19:13.561 { 00:19:13.561 "name": "ftl0", 00:19:13.561 "uuid": "e1c84fdb-dcb4-448b-8dca-f6f047ea4df3" 00:19:13.561 } 00:19:13.561 16:45:58 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:13.561 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:13.561 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.561 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:13.561 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:13.561 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.561 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:13.817 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:14.074 [ 00:19:14.074 { 00:19:14.074 "name": "ftl0", 00:19:14.074 "aliases": [ 00:19:14.074 "e1c84fdb-dcb4-448b-8dca-f6f047ea4df3" 00:19:14.074 ], 00:19:14.074 "product_name": "FTL 
disk", 00:19:14.074 "block_size": 4096, 00:19:14.074 "num_blocks": 20971520, 00:19:14.074 "uuid": "e1c84fdb-dcb4-448b-8dca-f6f047ea4df3", 00:19:14.074 "assigned_rate_limits": { 00:19:14.074 "rw_ios_per_sec": 0, 00:19:14.074 "rw_mbytes_per_sec": 0, 00:19:14.074 "r_mbytes_per_sec": 0, 00:19:14.074 "w_mbytes_per_sec": 0 00:19:14.074 }, 00:19:14.074 "claimed": false, 00:19:14.074 "zoned": false, 00:19:14.074 "supported_io_types": { 00:19:14.074 "read": true, 00:19:14.074 "write": true, 00:19:14.074 "unmap": true, 00:19:14.074 "flush": true, 00:19:14.074 "reset": false, 00:19:14.074 "nvme_admin": false, 00:19:14.074 "nvme_io": false, 00:19:14.074 "nvme_io_md": false, 00:19:14.074 "write_zeroes": true, 00:19:14.074 "zcopy": false, 00:19:14.074 "get_zone_info": false, 00:19:14.074 "zone_management": false, 00:19:14.074 "zone_append": false, 00:19:14.074 "compare": false, 00:19:14.074 "compare_and_write": false, 00:19:14.074 "abort": false, 00:19:14.074 "seek_hole": false, 00:19:14.074 "seek_data": false, 00:19:14.074 "copy": false, 00:19:14.074 "nvme_iov_md": false 00:19:14.074 }, 00:19:14.074 "driver_specific": { 00:19:14.074 "ftl": { 00:19:14.074 "base_bdev": "99aada00-ac94-477d-9794-1c16cdd143b6", 00:19:14.074 "cache": "nvc0n1p0" 00:19:14.074 } 00:19:14.074 } 00:19:14.074 } 00:19:14.074 ] 00:19:14.074 16:45:58 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:14.074 16:45:58 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:14.074 16:45:58 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:14.331 16:45:59 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:14.331 16:45:59 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:14.589 [2024-11-20 16:45:59.230868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.230920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.589 [2024-11-20 16:45:59.230933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.589 [2024-11-20 16:45:59.230943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.230976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.589 [2024-11-20 16:45:59.233692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.233723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.589 [2024-11-20 16:45:59.233735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.697 ms 00:19:14.589 [2024-11-20 16:45:59.233744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.234149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.234162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.589 [2024-11-20 16:45:59.234172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:19:14.589 [2024-11-20 16:45:59.234179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.237470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.237493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.589 
[2024-11-20 16:45:59.237506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:19:14.589 [2024-11-20 16:45:59.237520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.243995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.244024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.589 [2024-11-20 16:45:59.244036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:19:14.589 [2024-11-20 16:45:59.244045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.268330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.268375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.589 [2024-11-20 16:45:59.268402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.183 ms 00:19:14.589 [2024-11-20 16:45:59.268411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.282740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.282894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.589 [2024-11-20 16:45:59.282919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.265 ms 00:19:14.589 [2024-11-20 16:45:59.282927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.283106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.283117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.589 [2024-11-20 16:45:59.283127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:14.589 [2024-11-20 16:45:59.283134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.306254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.306448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:14.589 [2024-11-20 16:45:59.306470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.080 ms 00:19:14.589 [2024-11-20 16:45:59.306478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.329258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.329296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:14.589 [2024-11-20 16:45:59.329309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.731 ms 00:19:14.589 [2024-11-20 16:45:59.329317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.351923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.351963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.589 [2024-11-20 16:45:59.351978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.557 ms 00:19:14.589 [2024-11-20 16:45:59.351986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.374431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.589 [2024-11-20 16:45:59.374469] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.589 [2024-11-20 16:45:59.374482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.348 ms 00:19:14.589 [2024-11-20 16:45:59.374490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.589 [2024-11-20 16:45:59.374538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.589 [2024-11-20 16:45:59.374551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 
[2024-11-20 16:45:59.374740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.589 [2024-11-20 16:45:59.374775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:14.590 [2024-11-20 16:45:59.374952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.374991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:14.590 [2024-11-20 16:45:59.375444] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.590 [2024-11-20 16:45:59.375454] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1c84fdb-dcb4-448b-8dca-f6f047ea4df3 00:19:14.590 [2024-11-20 16:45:59.375461] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:14.590 [2024-11-20 16:45:59.375472] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:14.590 [2024-11-20 16:45:59.375481] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:14.590 [2024-11-20 16:45:59.375490] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:14.590 [2024-11-20 16:45:59.375506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.590 [2024-11-20 16:45:59.375516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.590 [2024-11-20 16:45:59.375523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.590 [2024-11-20 16:45:59.375531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.590 [2024-11-20 16:45:59.375538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:14.590 [2024-11-20 16:45:59.375546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.590 [2024-11-20 16:45:59.375553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.590 [2024-11-20 16:45:59.375562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:19:14.590 [2024-11-20 16:45:59.375570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.590 [2024-11-20 16:45:59.388242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.591 [2024-11-20 16:45:59.388283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.591 [2024-11-20 16:45:59.388297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.629 ms 00:19:14.591 [2024-11-20 16:45:59.388305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.591 [2024-11-20 16:45:59.388697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.591 [2024-11-20 16:45:59.388713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.591 [2024-11-20 16:45:59.388723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:19:14.591 [2024-11-20 16:45:59.388730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.591 [2024-11-20 16:45:59.433507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.591 [2024-11-20 16:45:59.433709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.591 [2024-11-20 16:45:59.433739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.591 [2024-11-20 16:45:59.433747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:14.591 [2024-11-20 16:45:59.433824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.591 [2024-11-20 16:45:59.433834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.591 [2024-11-20 16:45:59.433844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.591 [2024-11-20 16:45:59.433851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.591 [2024-11-20 16:45:59.433958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.591 [2024-11-20 16:45:59.433977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.591 [2024-11-20 16:45:59.433991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.591 [2024-11-20 16:45:59.433998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.591 [2024-11-20 16:45:59.434027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.591 [2024-11-20 16:45:59.434035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.591 [2024-11-20 16:45:59.434045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.591 [2024-11-20 16:45:59.434053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.516123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.516175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.849 [2024-11-20 16:45:59.516188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 16:45:59.516196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.580539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.580713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.849 [2024-11-20 16:45:59.580733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 16:45:59.580741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.580836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.580846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.849 [2024-11-20 16:45:59.580858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 16:45:59.580865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.580928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.580937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.849 [2024-11-20 16:45:59.580946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 16:45:59.580953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.581055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.581064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.849 [2024-11-20 16:45:59.581074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 
16:45:59.581082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.581127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.581136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.849 [2024-11-20 16:45:59.581145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 16:45:59.581152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.581189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.581198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.849 [2024-11-20 16:45:59.581207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 16:45:59.581216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.581265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.849 [2024-11-20 16:45:59.581274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.849 [2024-11-20 16:45:59.581284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.849 [2024-11-20 16:45:59.581291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.849 [2024-11-20 16:45:59.581462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.547 ms, result 0 00:19:14.849 true 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75101 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75101 ']' 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75101 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75101 00:19:14.849 killing process with pid 75101 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75101' 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75101 00:19:14.849 16:45:59 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75101 00:19:21.411 16:46:05 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:21.411 16:46:05 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:21.411 16:46:05 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:21.411 16:46:05 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.411 16:46:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:21.411 16:46:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:21.411 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:21.411 fio-3.35 00:19:21.411 Starting 1 thread 00:19:25.615 00:19:25.615 test: (groupid=0, jobs=1): err= 0: pid=75302: Wed Nov 20 16:46:09 2024 00:19:25.615 read: IOPS=1394, BW=92.6MiB/s (97.1MB/s)(255MiB/2748msec) 00:19:25.615 slat (nsec): min=2960, max=19742, avg=3856.85, stdev=1677.70 00:19:25.615 clat (usec): min=228, max=715, avg=325.01, stdev=41.38 00:19:25.615 lat (usec): min=231, max=724, avg=328.87, stdev=42.09 00:19:25.615 clat percentiles (usec): 00:19:25.615 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:19:25.615 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 322], 00:19:25.615 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 379], 95.00th=[ 404], 00:19:25.615 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 603], 99.95th=[ 685], 00:19:25.615 | 99.99th=[ 717] 00:19:25.615 write: IOPS=1405, BW=93.3MiB/s (97.8MB/s)(256MiB/2744msec); 0 zone resets 00:19:25.615 slat (nsec): min=13761, max=54201, avg=16736.34, stdev=2737.55 00:19:25.615 clat (usec): min=267, max=941, avg=356.65, stdev=58.70 00:19:25.615 lat (usec): min=283, max=963, avg=373.38, stdev=58.94 00:19:25.615 clat percentiles (usec): 00:19:25.615 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:19:25.615 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:19:25.615 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 416], 95.00th=[ 433], 00:19:25.615 | 99.00th=[ 644], 99.50th=[ 709], 99.90th=[ 848], 99.95th=[ 930], 00:19:25.615 | 99.99th=[ 938] 00:19:25.615 bw ( KiB/s): min=93160, max=102000, per=100.00%, avg=96696.00, stdev=3555.56, samples=5 00:19:25.615 iops : min= 1370, max= 1500, avg=1422.00, stdev=52.29, samples=5 00:19:25.615 lat (usec) : 250=0.22%, 500=98.04%, 750=1.55%, 1000=0.20% 
00:19:25.615 cpu : usr=99.27%, sys=0.11%, ctx=6, majf=0, minf=1169 00:19:25.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.615 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.615 00:19:25.615 Run status group 0 (all jobs): 00:19:25.615 READ: bw=92.6MiB/s (97.1MB/s), 92.6MiB/s-92.6MiB/s (97.1MB/s-97.1MB/s), io=255MiB (267MB), run=2748-2748msec 00:19:25.615 WRITE: bw=93.3MiB/s (97.8MB/s), 93.3MiB/s-93.3MiB/s (97.8MB/s-97.8MB/s), io=256MiB (269MB), run=2744-2744msec 00:19:26.593 ----------------------------------------------------- 00:19:26.593 Suppressions used: 00:19:26.593 count bytes template 00:19:26.593 1 5 /usr/src/fio/parse.c 00:19:26.593 1 8 libtcmalloc_minimal.so 00:19:26.593 1 904 libcrypto.so 00:19:26.593 ----------------------------------------------------- 00:19:26.593 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:26.852 16:46:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:26.852 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:26.853 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:26.853 fio-3.35 00:19:26.853 Starting 2 threads 00:19:53.423 00:19:53.423 first_half: (groupid=0, jobs=1): err= 0: pid=75389: Wed Nov 20 16:46:34 2024 00:19:53.423 read: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(255MiB/21960msec) 00:19:53.423 slat (nsec): min=3040, max=29914, avg=4030.97, stdev=953.68 00:19:53.423 clat (usec): min=598, max=268730, avg=34671.63, stdev=18274.98 00:19:53.423 lat (usec): min=602, max=268734, avg=34675.66, stdev=18275.01 00:19:53.423 clat percentiles (msec): 00:19:53.423 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 30], 00:19:53.423 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:53.423 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 48], 00:19:53.423 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 226], 99.95th=[ 236], 00:19:53.423 | 99.99th=[ 262] 00:19:53.423 write: IOPS=3626, BW=14.2MiB/s (14.9MB/s)(256MiB/18070msec); 0 zone resets 00:19:53.423 slat (usec): min=3, max=946, avg= 5.50, stdev= 5.09 00:19:53.423 clat (usec): min=356, max=70884, avg=8344.74, stdev=13005.03 00:19:53.423 lat (usec): min=361, max=70889, avg=8350.24, stdev=13005.02 00:19:53.423 clat percentiles (usec): 00:19:53.423 | 1.00th=[ 660], 5.00th=[ 775], 10.00th=[ 947], 20.00th=[ 1369], 00:19:53.423 | 30.00th=[ 3064], 40.00th=[ 4080], 50.00th=[ 4817], 60.00th=[ 5407], 00:19:53.423 | 70.00th=[ 6390], 80.00th=[ 9634], 90.00th=[13042], 95.00th=[53216], 00:19:53.423 | 99.00th=[62653], 99.50th=[64226], 99.90th=[68682], 99.95th=[69731], 00:19:53.423 | 99.99th=[70779] 00:19:53.423 bw ( KiB/s): min= 920, max=41824, per=100.00%, avg=26214.40, stdev=14153.59, samples=20 00:19:53.423 iops : min= 230, max=10456, avg=6553.60, stdev=3538.40, samples=20 00:19:53.423 lat (usec) : 500=0.03%, 750=2.02%, 1000=3.84% 00:19:53.423 lat (msec) : 2=5.93%, 4=8.18%, 10=21.09%, 20=6.12%, 50=47.95% 00:19:53.423 lat (msec) : 100=3.74%, 250=1.08%, 500=0.01% 00:19:53.423 cpu : usr=99.26%, sys=0.11%, ctx=42, majf=0, minf=5595 00:19:53.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:53.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.423 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.423 issued rwts: total=65253,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.423 second_half: (groupid=0, jobs=1): err= 0: pid=75390: Wed Nov 20 16:46:34 2024 00:19:53.423 read: IOPS=2950, BW=11.5MiB/s (12.1MB/s)(255MiB/22138msec) 00:19:53.423 slat (nsec): min=3013, max=51430, avg=3948.42, stdev=867.02 00:19:53.423 clat (usec): min=630, max=272607, avg=34172.11, stdev=20061.80 00:19:53.423 lat (usec): min=635, max=272611, avg=34176.06, stdev=20061.88 00:19:53.423 clat percentiles (msec): 00:19:53.423 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 30], 00:19:53.423 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:19:53.423 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 47], 00:19:53.423 | 
99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 211], 99.95th=[ 222], 00:19:53.423 | 99.99th=[ 266] 00:19:53.423 write: IOPS=3260, BW=12.7MiB/s (13.4MB/s)(256MiB/20098msec); 0 zone resets 00:19:53.423 slat (usec): min=3, max=175, avg= 5.68, stdev= 2.59 00:19:53.423 clat (usec): min=335, max=71343, avg=9162.86, stdev=14053.62 00:19:53.423 lat (usec): min=359, max=71350, avg=9168.54, stdev=14053.65 00:19:53.423 clat percentiles (usec): 00:19:53.423 | 1.00th=[ 644], 5.00th=[ 734], 10.00th=[ 824], 20.00th=[ 1090], 00:19:53.423 | 30.00th=[ 1860], 40.00th=[ 3294], 50.00th=[ 4490], 60.00th=[ 5407], 00:19:53.423 | 70.00th=[ 6652], 80.00th=[11338], 90.00th=[27395], 95.00th=[53740], 00:19:53.424 | 99.00th=[63701], 99.50th=[65274], 99.90th=[68682], 99.95th=[69731], 00:19:53.424 | 99.99th=[70779] 00:19:53.424 bw ( KiB/s): min= 408, max=61608, per=91.37%, avg=23835.00, stdev=18118.02, samples=22 00:19:53.424 iops : min= 102, max=15402, avg=5958.73, stdev=4529.48, samples=22 00:19:53.424 lat (usec) : 500=0.02%, 750=3.09%, 1000=5.27% 00:19:53.424 lat (msec) : 2=7.06%, 4=7.65%, 10=17.20%, 20=6.13%, 50=48.67% 00:19:53.424 lat (msec) : 100=3.65%, 250=1.24%, 500=0.01% 00:19:53.424 cpu : usr=99.37%, sys=0.11%, ctx=42, majf=0, minf=5522 00:19:53.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:53.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.424 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.424 issued rwts: total=65321,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.424 00:19:53.424 Run status group 0 (all jobs): 00:19:53.424 READ: bw=23.0MiB/s (24.2MB/s), 11.5MiB/s-11.6MiB/s (12.1MB/s-12.2MB/s), io=510MiB (535MB), run=21960-22138msec 00:19:53.424 WRITE: bw=25.5MiB/s (26.7MB/s), 12.7MiB/s-14.2MiB/s (13.4MB/s-14.9MB/s), io=512MiB (537MB), run=18070-20098msec 00:19:53.424 ----------------------------------------------------- 00:19:53.424 Suppressions used: 00:19:53.424 count bytes template 00:19:53.424 2 10 /usr/src/fio/parse.c 00:19:53.424 4 384 /usr/src/fio/iolog.c 00:19:53.424 1 8 libtcmalloc_minimal.so 00:19:53.424 1 904 libcrypto.so 00:19:53.424 ----------------------------------------------------- 00:19:53.424 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:53.424 16:46:36 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:53.424 16:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:53.424 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:53.424 fio-3.35 00:19:53.424 Starting 1 thread 00:20:08.349 00:20:08.349 test: (groupid=0, jobs=1): err= 0: pid=75686: Wed Nov 20 16:46:50 2024 00:20:08.349 read: IOPS=7441, BW=29.1MiB/s (30.5MB/s)(255MiB/8762msec) 00:20:08.349 slat (nsec): min=3020, max=38579, avg=4364.46, stdev=938.21 00:20:08.349 clat (usec): min=619, max=35442, avg=17192.40, stdev=3326.82 00:20:08.349 lat (usec): min=623, max=35446, avg=17196.77, stdev=3326.88 00:20:08.349 clat percentiles (usec): 00:20:08.349 | 1.00th=[13960], 5.00th=[14222], 10.00th=[14615], 20.00th=[14877], 00:20:08.349 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[16188], 00:20:08.349 | 70.00th=[17433], 80.00th=[20055], 90.00th=[22152], 95.00th=[23987], 00:20:08.349 | 99.00th=[27657], 99.50th=[29230], 99.90th=[32900], 99.95th=[33817], 00:20:08.349 | 99.99th=[34866] 00:20:08.349 write: IOPS=16.0k, BW=62.6MiB/s (65.7MB/s)(256MiB/4088msec); 0 zone resets 00:20:08.349 slat (usec): min=4, max=546, avg= 6.66, stdev= 3.72 00:20:08.349 clat (usec): min=487, max=46862, avg=7942.26, stdev=10058.65 00:20:08.349 lat (usec): min=493, max=46869, avg=7948.92, stdev=10058.65 00:20:08.349 clat percentiles (usec): 00:20:08.349 | 1.00th=[ 611], 5.00th=[ 676], 10.00th=[ 742], 20.00th=[ 922], 00:20:08.349 | 30.00th=[ 1074], 40.00th=[ 1549], 50.00th=[ 5211], 60.00th=[ 5932], 00:20:08.349 | 70.00th=[ 7046], 80.00th=[ 8717], 90.00th=[27919], 95.00th=[30278], 00:20:08.349 | 99.00th=[38536], 99.50th=[40109], 99.90th=[43254], 99.95th=[43779], 00:20:08.349 | 99.99th=[45351] 00:20:08.349 bw ( KiB/s): min= 8816, max=85184, per=90.84%, avg=58253.33, stdev=21664.92, samples=9 00:20:08.349 iops : min= 2204, max=21296, avg=14563.56, stdev=5416.21, samples=9 00:20:08.349 lat (usec) : 500=0.01%, 750=5.18%, 1000=7.44% 00:20:08.349 lat (msec) : 2=7.98%, 4=0.58%, 10=20.39%, 20=40.56%, 50=17.86% 00:20:08.349 cpu : usr=98.89%, sys=0.26%, ctx=32, majf=0, minf=5565 00:20:08.349 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:08.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.349 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:08.349 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:08.349 00:20:08.349 Run status group 0 (all jobs): 00:20:08.349 READ: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=255MiB (267MB), run=8762-8762msec 00:20:08.349 WRITE: bw=62.6MiB/s (65.7MB/s), 62.6MiB/s-62.6MiB/s (65.7MB/s-65.7MB/s), io=256MiB (268MB), run=4088-4088msec 00:20:08.349 ----------------------------------------------------- 00:20:08.349 Suppressions used: 00:20:08.349 count bytes template 00:20:08.349 1 5 /usr/src/fio/parse.c 00:20:08.349 2 192 /usr/src/fio/iolog.c 00:20:08.350 1 8 libtcmalloc_minimal.so 00:20:08.350 1 904 libcrypto.so 00:20:08.350 ----------------------------------------------------- 00:20:08.350 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:08.350 Remove shared memory files 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57126 /dev/shm/spdk_tgt_trace.pid74024 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:08.350 ************************************ 00:20:08.350 END TEST ftl_fio_basic 00:20:08.350 ************************************ 00:20:08.350 00:20:08.350 real 1m1.828s 00:20:08.350 user 2m7.579s 00:20:08.350 sys 0m13.256s 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.350 16:46:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:08.350 16:46:52 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:08.350 16:46:52 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:08.350 16:46:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.350 16:46:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:08.350 ************************************ 00:20:08.350 START TEST ftl_bdevperf 00:20:08.350 ************************************ 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:08.350 * Looking for test storage... 
00:20:08.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:08.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.350 --rc genhtml_branch_coverage=1 00:20:08.350 --rc genhtml_function_coverage=1 00:20:08.350 --rc genhtml_legend=1 00:20:08.350 --rc geninfo_all_blocks=1 00:20:08.350 --rc geninfo_unexecuted_blocks=1 00:20:08.350 00:20:08.350 ' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:08.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.350 --rc genhtml_branch_coverage=1 00:20:08.350 
--rc genhtml_function_coverage=1 00:20:08.350 --rc genhtml_legend=1 00:20:08.350 --rc geninfo_all_blocks=1 00:20:08.350 --rc geninfo_unexecuted_blocks=1 00:20:08.350 00:20:08.350 ' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:08.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.350 --rc genhtml_branch_coverage=1 00:20:08.350 --rc genhtml_function_coverage=1 00:20:08.350 --rc genhtml_legend=1 00:20:08.350 --rc geninfo_all_blocks=1 00:20:08.350 --rc geninfo_unexecuted_blocks=1 00:20:08.350 00:20:08.350 ' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:08.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.350 --rc genhtml_branch_coverage=1 00:20:08.350 --rc genhtml_function_coverage=1 00:20:08.350 --rc genhtml_legend=1 00:20:08.350 --rc geninfo_all_blocks=1 00:20:08.350 --rc geninfo_unexecuted_blocks=1 00:20:08.350 00:20:08.350 ' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:08.350 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75909 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75909 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75909 ']' 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.351 16:46:52 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:08.351 [2024-11-20 16:46:52.588110] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:20:08.351 [2024-11-20 16:46:52.589000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75909 ] 00:20:08.351 [2024-11-20 16:46:52.766206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.351 [2024-11-20 16:46:52.865647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:08.608 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:08.865 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:09.123 { 00:20:09.123 "name": "nvme0n1", 00:20:09.123 "aliases": [ 00:20:09.123 "624780cd-e3e4-4540-9aa3-3c1d2b050441" 00:20:09.123 ], 00:20:09.123 "product_name": "NVMe disk", 00:20:09.123 "block_size": 4096, 00:20:09.123 "num_blocks": 1310720, 00:20:09.123 "uuid": "624780cd-e3e4-4540-9aa3-3c1d2b050441", 00:20:09.123 "numa_id": -1, 00:20:09.123 "assigned_rate_limits": { 00:20:09.123 "rw_ios_per_sec": 0, 00:20:09.123 "rw_mbytes_per_sec": 0, 00:20:09.123 "r_mbytes_per_sec": 0, 00:20:09.123 "w_mbytes_per_sec": 0 00:20:09.123 }, 00:20:09.123 "claimed": true, 00:20:09.123 "claim_type": "read_many_write_one", 00:20:09.123 "zoned": false, 00:20:09.123 "supported_io_types": { 00:20:09.123 "read": true, 00:20:09.123 "write": true, 00:20:09.123 "unmap": true, 00:20:09.123 "flush": true, 00:20:09.123 "reset": true, 00:20:09.123 "nvme_admin": true, 00:20:09.123 "nvme_io": true, 00:20:09.123 "nvme_io_md": false, 00:20:09.123 "write_zeroes": true, 00:20:09.123 "zcopy": false, 00:20:09.123 "get_zone_info": false, 00:20:09.123 "zone_management": false, 00:20:09.123 "zone_append": false, 00:20:09.123 "compare": true, 00:20:09.123 "compare_and_write": false, 00:20:09.123 "abort": true, 00:20:09.123 "seek_hole": false, 00:20:09.123 "seek_data": false, 00:20:09.123 "copy": true, 00:20:09.123 "nvme_iov_md": false 00:20:09.123 }, 00:20:09.123 "driver_specific": { 00:20:09.123 
"nvme": [ 00:20:09.123 { 00:20:09.123 "pci_address": "0000:00:11.0", 00:20:09.123 "trid": { 00:20:09.123 "trtype": "PCIe", 00:20:09.123 "traddr": "0000:00:11.0" 00:20:09.123 }, 00:20:09.123 "ctrlr_data": { 00:20:09.123 "cntlid": 0, 00:20:09.123 "vendor_id": "0x1b36", 00:20:09.123 "model_number": "QEMU NVMe Ctrl", 00:20:09.123 "serial_number": "12341", 00:20:09.123 "firmware_revision": "8.0.0", 00:20:09.123 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:09.123 "oacs": { 00:20:09.123 "security": 0, 00:20:09.123 "format": 1, 00:20:09.123 "firmware": 0, 00:20:09.123 "ns_manage": 1 00:20:09.123 }, 00:20:09.123 "multi_ctrlr": false, 00:20:09.123 "ana_reporting": false 00:20:09.123 }, 00:20:09.123 "vs": { 00:20:09.123 "nvme_version": "1.4" 00:20:09.123 }, 00:20:09.123 "ns_data": { 00:20:09.123 "id": 1, 00:20:09.123 "can_share": false 00:20:09.123 } 00:20:09.123 } 00:20:09.123 ], 00:20:09.123 "mp_policy": "active_passive" 00:20:09.123 } 00:20:09.123 } 00:20:09.123 ]' 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:09.123 16:46:53 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:09.381 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=6e26e57b-f396-40c5-b509-2f169b4689b2 00:20:09.381 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:09.381 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e26e57b-f396-40c5-b509-2f169b4689b2 00:20:09.639 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:09.897 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9944a104-2461-4e41-bb09-b96d6918901e 00:20:09.897 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9944a104-2461-4e41-bb09-b96d6918901e 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.155 16:46:54 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:10.155 16:46:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.155 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:10.155 { 00:20:10.155 "name": "ecfa643e-3298-403a-8b42-0c88e9ffb18b", 00:20:10.155 "aliases": [ 00:20:10.155 "lvs/nvme0n1p0" 00:20:10.155 ], 00:20:10.155 "product_name": "Logical Volume", 00:20:10.155 "block_size": 4096, 00:20:10.155 "num_blocks": 26476544, 00:20:10.155 "uuid": "ecfa643e-3298-403a-8b42-0c88e9ffb18b", 00:20:10.155 "assigned_rate_limits": { 00:20:10.155 "rw_ios_per_sec": 0, 00:20:10.155 "rw_mbytes_per_sec": 0, 00:20:10.155 "r_mbytes_per_sec": 0, 00:20:10.155 "w_mbytes_per_sec": 0 00:20:10.155 }, 00:20:10.155 "claimed": false, 00:20:10.155 "zoned": false, 00:20:10.155 "supported_io_types": { 00:20:10.155 "read": true, 00:20:10.155 "write": true, 00:20:10.155 "unmap": true, 00:20:10.155 "flush": false, 00:20:10.155 "reset": true, 00:20:10.155 "nvme_admin": false, 00:20:10.155 "nvme_io": false, 00:20:10.155 "nvme_io_md": false, 00:20:10.155 "write_zeroes": true, 00:20:10.155 "zcopy": false, 00:20:10.155 "get_zone_info": false, 00:20:10.155 "zone_management": false, 00:20:10.155 "zone_append": false, 00:20:10.155 "compare": false, 00:20:10.155 "compare_and_write": false, 00:20:10.155 "abort": false, 00:20:10.155 "seek_hole": true, 00:20:10.155 "seek_data": true, 00:20:10.155 "copy": false, 00:20:10.155 "nvme_iov_md": false 00:20:10.155 }, 00:20:10.155 "driver_specific": { 00:20:10.155 "lvol": { 00:20:10.155 "lvol_store_uuid": "9944a104-2461-4e41-bb09-b96d6918901e", 00:20:10.155 "base_bdev": "nvme0n1", 00:20:10.155 "thin_provision": true, 00:20:10.155 "num_allocated_clusters": 0, 00:20:10.155 "snapshot": false, 00:20:10.155 "clone": false, 00:20:10.155 "esnap_clone": false 00:20:10.155 } 00:20:10.155 } 00:20:10.155 } 00:20:10.155 ]' 00:20:10.155 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:10.413 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:10.413 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:10.414 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:10.414 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:10.414 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:10.414 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:10.414 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:10.414 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:10.671 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:10.929 { 00:20:10.929 "name": "ecfa643e-3298-403a-8b42-0c88e9ffb18b", 00:20:10.929 "aliases": [ 00:20:10.929 "lvs/nvme0n1p0" 00:20:10.929 ], 00:20:10.929 "product_name": "Logical Volume", 00:20:10.929 "block_size": 4096, 00:20:10.929 "num_blocks": 26476544, 00:20:10.929 "uuid": "ecfa643e-3298-403a-8b42-0c88e9ffb18b", 00:20:10.929 "assigned_rate_limits": { 00:20:10.929 "rw_ios_per_sec": 0, 00:20:10.929 "rw_mbytes_per_sec": 0, 00:20:10.929 "r_mbytes_per_sec": 0, 00:20:10.929 "w_mbytes_per_sec": 0 00:20:10.929 }, 00:20:10.929 "claimed": false, 00:20:10.929 "zoned": false, 00:20:10.929 "supported_io_types": { 00:20:10.929 "read": true, 00:20:10.929 "write": true, 00:20:10.929 "unmap": true, 00:20:10.929 "flush": false, 00:20:10.929 "reset": true, 00:20:10.929 "nvme_admin": false, 00:20:10.929 "nvme_io": false, 00:20:10.929 "nvme_io_md": false, 00:20:10.929 "write_zeroes": true, 00:20:10.929 "zcopy": false, 00:20:10.929 "get_zone_info": false, 00:20:10.929 "zone_management": false, 00:20:10.929 "zone_append": false, 00:20:10.929 "compare": false, 00:20:10.929 "compare_and_write": false, 00:20:10.929 "abort": false, 00:20:10.929 "seek_hole": true, 00:20:10.929 "seek_data": true, 00:20:10.929 "copy": false, 00:20:10.929 "nvme_iov_md": false 00:20:10.929 }, 00:20:10.929 "driver_specific": { 00:20:10.929 "lvol": { 00:20:10.929 "lvol_store_uuid": "9944a104-2461-4e41-bb09-b96d6918901e", 00:20:10.929 "base_bdev": "nvme0n1", 00:20:10.929 "thin_provision": true, 00:20:10.929 "num_allocated_clusters": 0, 00:20:10.929 "snapshot": false, 00:20:10.929 "clone": false, 00:20:10.929 "esnap_clone": false 00:20:10.929 } 00:20:10.929 } 00:20:10.929 } 00:20:10.929 ]' 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:10.929 16:46:55 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:11.187 16:46:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:20:11.187 16:46:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:11.187 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:11.187 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:11.187 16:46:55 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:20:11.187 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:11.187 16:46:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ecfa643e-3298-403a-8b42-0c88e9ffb18b 00:20:11.187 16:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:11.187 { 00:20:11.187 "name": "ecfa643e-3298-403a-8b42-0c88e9ffb18b", 00:20:11.187 "aliases": [ 00:20:11.187 "lvs/nvme0n1p0" 00:20:11.187 ], 00:20:11.187 "product_name": "Logical Volume", 00:20:11.187 "block_size": 4096, 00:20:11.187 "num_blocks": 26476544, 00:20:11.187 "uuid": "ecfa643e-3298-403a-8b42-0c88e9ffb18b", 00:20:11.187 "assigned_rate_limits": { 00:20:11.187 "rw_ios_per_sec": 0, 00:20:11.187 "rw_mbytes_per_sec": 0, 00:20:11.187 "r_mbytes_per_sec": 0, 00:20:11.187 "w_mbytes_per_sec": 0 00:20:11.187 }, 00:20:11.187 "claimed": false, 00:20:11.187 "zoned": false, 00:20:11.187 "supported_io_types": { 00:20:11.187 "read": true, 00:20:11.187 "write": true, 00:20:11.187 "unmap": true, 00:20:11.187 "flush": false, 00:20:11.187 "reset": true, 00:20:11.187 "nvme_admin": false, 00:20:11.187 "nvme_io": false, 00:20:11.187 "nvme_io_md": false, 00:20:11.187 "write_zeroes": true, 00:20:11.187 "zcopy": false, 00:20:11.187 "get_zone_info": false, 00:20:11.187 "zone_management": false, 00:20:11.187 "zone_append": false, 00:20:11.187 "compare": false, 00:20:11.187 "compare_and_write": false, 00:20:11.187 "abort": false, 00:20:11.187 "seek_hole": true, 00:20:11.187 "seek_data": true, 00:20:11.187 "copy": false, 00:20:11.187 "nvme_iov_md": false 00:20:11.187 }, 00:20:11.187 "driver_specific": { 00:20:11.187 "lvol": { 00:20:11.187 "lvol_store_uuid": "9944a104-2461-4e41-bb09-b96d6918901e", 00:20:11.187 "base_bdev": "nvme0n1", 00:20:11.187 "thin_provision": true, 00:20:11.187 "num_allocated_clusters": 0, 00:20:11.187 "snapshot": false, 00:20:11.187 "clone": false, 00:20:11.187 "esnap_clone": false 00:20:11.187 } 00:20:11.187 } 00:20:11.187 } 00:20:11.187 ]' 00:20:11.187 16:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:11.446 16:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:11.446 16:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:11.446 16:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:11.446 16:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:11.446 16:46:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:11.446 16:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:20:11.446 16:46:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ecfa643e-3298-403a-8b42-0c88e9ffb18b -c nvc0n1p0 --l2p_dram_limit 20 00:20:11.446 [2024-11-20 16:46:56.307122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.307334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.446 [2024-11-20 16:46:56.307355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.446 [2024-11-20 16:46:56.307366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.307441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.307457] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.446 [2024-11-20 16:46:56.307465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:11.446 [2024-11-20 16:46:56.307474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.307492] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:11.446 [2024-11-20 16:46:56.308188] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.446 [2024-11-20 16:46:56.308203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.308213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.446 [2024-11-20 16:46:56.308222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:20:11.446 [2024-11-20 16:46:56.308231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.308292] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 752e9430-14eb-4cd1-867a-963d5ae929d4 00:20:11.446 [2024-11-20 16:46:56.309418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.309451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:11.446 [2024-11-20 16:46:56.309462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:11.446 [2024-11-20 16:46:56.309473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.314525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.314552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.446 [2024-11-20 16:46:56.314563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:20:11.446 [2024-11-20 16:46:56.314570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.314655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.314664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.446 [2024-11-20 16:46:56.314678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:11.446 [2024-11-20 16:46:56.314685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.314721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.314730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.446 [2024-11-20 16:46:56.314739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:11.446 [2024-11-20 16:46:56.314749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.314768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.446 [2024-11-20 16:46:56.318272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.318426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.446 [2024-11-20 16:46:56.318441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.510 ms 00:20:11.446 [2024-11-20 16:46:56.318450] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.318482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.318492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.446 [2024-11-20 16:46:56.318500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:11.446 [2024-11-20 16:46:56.318509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.318537] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:11.446 [2024-11-20 16:46:56.318673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.446 [2024-11-20 16:46:56.318684] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.446 [2024-11-20 16:46:56.318696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:11.446 [2024-11-20 16:46:56.318706] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.446 [2024-11-20 16:46:56.318716] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.446 [2024-11-20 16:46:56.318724] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:11.446 [2024-11-20 16:46:56.318732] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.446 [2024-11-20 16:46:56.318740] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.446 [2024-11-20 16:46:56.318748] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.446 [2024-11-20 16:46:56.318755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.318767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.446 [2024-11-20 16:46:56.318775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:20:11.446 [2024-11-20 16:46:56.318783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.318863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.446 [2024-11-20 16:46:56.318872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.446 [2024-11-20 16:46:56.318879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:11.446 [2024-11-20 16:46:56.318889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.446 [2024-11-20 16:46:56.318989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.446 [2024-11-20 16:46:56.319000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.446 [2024-11-20 16:46:56.319010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.446 [2024-11-20 16:46:56.319019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.446 [2024-11-20 16:46:56.319035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:11.446 
[2024-11-20 16:46:56.319051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.446 [2024-11-20 16:46:56.319058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.446 [2024-11-20 16:46:56.319072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.446 [2024-11-20 16:46:56.319080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:11.446 [2024-11-20 16:46:56.319087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.446 [2024-11-20 16:46:56.319101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.446 [2024-11-20 16:46:56.319107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:11.446 [2024-11-20 16:46:56.319118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.446 [2024-11-20 16:46:56.319133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:11.446 [2024-11-20 16:46:56.319139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.446 [2024-11-20 16:46:56.319155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.446 [2024-11-20 16:46:56.319173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.446 [2024-11-20 16:46:56.319181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.446 [2024-11-20 16:46:56.319195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.446 [2024-11-20 16:46:56.319202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.446 [2024-11-20 16:46:56.319216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:11.446 [2024-11-20 16:46:56.319224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:11.446 [2024-11-20 16:46:56.319231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.446 [2024-11-20 16:46:56.319240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.447 [2024-11-20 16:46:56.319247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:11.447 [2024-11-20 16:46:56.319254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.447 [2024-11-20 16:46:56.319260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.447 [2024-11-20 16:46:56.319268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:11.447 [2024-11-20 16:46:56.319275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.447 [2024-11-20 16:46:56.319283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.447 [2024-11-20 16:46:56.319289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:11.447 [2024-11-20 16:46:56.319297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.447 [2024-11-20 16:46:56.319303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.447 [2024-11-20 16:46:56.319311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:11.447 [2024-11-20 16:46:56.319318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.447 [2024-11-20 16:46:56.319326] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.447 [2024-11-20 16:46:56.319334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.447 [2024-11-20 16:46:56.319342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.447 [2024-11-20 16:46:56.319349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.447 [2024-11-20 16:46:56.319360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.447 [2024-11-20 16:46:56.319367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.447 [2024-11-20 16:46:56.319375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.447 [2024-11-20 16:46:56.319394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.447 [2024-11-20 16:46:56.319403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.447 [2024-11-20 16:46:56.319409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.447 [2024-11-20 16:46:56.319420] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.447 [2024-11-20 16:46:56.319431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.447 [2024-11-20 16:46:56.319441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:11.447 [2024-11-20 16:46:56.319449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:11.447 [2024-11-20 16:46:56.319457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:11.447 [2024-11-20 16:46:56.319464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:11.447 [2024-11-20 16:46:56.319472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:11.447 [2024-11-20 16:46:56.319480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:11.447 [2024-11-20 16:46:56.319488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:11.447 [2024-11-20 16:46:56.319495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:11.447 [2024-11-20 16:46:56.319505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:11.447 [2024-11-20 16:46:56.319512] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:11.447 [2024-11-20 16:46:56.319520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:11.447 [2024-11-20 16:46:56.319527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:11.447 [2024-11-20 16:46:56.319536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:11.447 [2024-11-20 16:46:56.319543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:11.447 [2024-11-20 16:46:56.319552] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.447 [2024-11-20 16:46:56.319560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.447 [2024-11-20 16:46:56.319569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.447 [2024-11-20 16:46:56.319576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.447 [2024-11-20 16:46:56.319585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.447 [2024-11-20 16:46:56.319593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.447 [2024-11-20 16:46:56.319601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.447 [2024-11-20 16:46:56.319610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.447 [2024-11-20 16:46:56.319619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:20:11.447 [2024-11-20 16:46:56.319625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.447 [2024-11-20 16:46:56.319743] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:20:11.447 [2024-11-20 16:46:56.319752] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:13.975 [2024-11-20 16:46:58.780320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.975 [2024-11-20 16:46:58.780539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:13.975 [2024-11-20 16:46:58.780620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2460.567 ms 00:20:13.975 [2024-11-20 16:46:58.780646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.975 [2024-11-20 16:46:58.806069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.975 [2024-11-20 16:46:58.806258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.975 [2024-11-20 16:46:58.806327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.024 ms 00:20:13.975 [2024-11-20 16:46:58.806352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.975 [2024-11-20 16:46:58.806512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.975 [2024-11-20 16:46:58.806644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.975 [2024-11-20 16:46:58.806674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:13.975 [2024-11-20 16:46:58.806694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.975 [2024-11-20 16:46:58.853133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.975 [2024-11-20 16:46:58.853305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.975 [2024-11-20 16:46:58.853387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.382 ms 00:20:13.975 [2024-11-20 16:46:58.853414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.975 [2024-11-20 16:46:58.853464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.975 [2024-11-20 16:46:58.853490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.975 [2024-11-20 16:46:58.853512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.975 [2024-11-20 16:46:58.853533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.975 [2024-11-20 16:46:58.853896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.975 [2024-11-20 16:46:58.854027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.975 [2024-11-20 16:46:58.854088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:13.975 [2024-11-20 16:46:58.854112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.975 [2024-11-20 16:46:58.854237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.975 [2024-11-20 16:46:58.854588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.975 [2024-11-20 16:46:58.854642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:13.975 [2024-11-20 16:46:58.854789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:58.867881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:58.867990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.234 [2024-11-20 
16:46:58.868046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.045 ms 00:20:14.234 [2024-11-20 16:46:58.868069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:58.879312] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:14.234 [2024-11-20 16:46:58.884432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:58.884530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.234 [2024-11-20 16:46:58.884578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.275 ms 00:20:14.234 [2024-11-20 16:46:58.884602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:58.941038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:58.941233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:14.234 [2024-11-20 16:46:58.941296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.399 ms 00:20:14.234 [2024-11-20 16:46:58.941323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:58.941538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:58.941699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.234 [2024-11-20 16:46:58.941725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:14.234 [2024-11-20 16:46:58.941746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:58.964726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:58.964878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:14.234 [2024-11-20 16:46:58.964940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.916 ms 00:20:14.234 [2024-11-20 16:46:58.964966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:58.987275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:58.987405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:14.234 [2024-11-20 16:46:58.987469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.187 ms 00:20:14.234 [2024-11-20 16:46:58.987490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:58.988104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:58.988192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.234 [2024-11-20 16:46:58.988241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:20:14.234 [2024-11-20 16:46:58.988265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:59.054290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:59.054475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:14.234 [2024-11-20 16:46:59.054535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.967 ms 00:20:14.234 [2024-11-20 16:46:59.054561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 
16:46:59.078732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:59.078898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:14.234 [2024-11-20 16:46:59.078954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.078 ms 00:20:14.234 [2024-11-20 16:46:59.078982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.234 [2024-11-20 16:46:59.102335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.234 [2024-11-20 16:46:59.102500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:14.234 [2024-11-20 16:46:59.102554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.294 ms 00:20:14.234 [2024-11-20 16:46:59.102578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.491 [2024-11-20 16:46:59.124996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.491 [2024-11-20 16:46:59.125144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.491 [2024-11-20 16:46:59.125199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.375 ms 00:20:14.491 [2024-11-20 16:46:59.125224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.491 [2024-11-20 16:46:59.125269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.491 [2024-11-20 16:46:59.125297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.491 [2024-11-20 16:46:59.125317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.491 [2024-11-20 16:46:59.125338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.491 [2024-11-20 16:46:59.125491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.491 [2024-11-20 16:46:59.125606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.491 [2024-11-20 16:46:59.125655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:14.491 [2024-11-20 16:46:59.125679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.491 [2024-11-20 16:46:59.126581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2819.033 ms, result 0 00:20:14.491 { 00:20:14.491 "name": "ftl0", 00:20:14.491 "uuid": "752e9430-14eb-4cd1-867a-963d5ae929d4" 00:20:14.491 } 00:20:14.491 16:46:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:14.491 16:46:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:14.491 16:46:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:14.491 16:46:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:14.749 [2024-11-20 16:46:59.458587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:14.749 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:14.749 Zero copy mechanism will not be used. 00:20:14.749 Running I/O for 4 seconds... 
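A minimal sketch of what drives the three measurement passes below, assembled only from commands already printed in this trace (bdevperf was launched earlier with "-z -T ftl0" against the ftl0 device created above); this is an illustration of the sequence, not additional captured output:

  # Pass 1: QD1, 68KiB random writes (69632 B > 64KiB zero-copy threshold, hence the notice above)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  # Pass 2: QD128, 4KiB random writes
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
  # Pass 3: QD128, 4KiB verify workload (data is read back and checked over the LBA range reported in the results)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096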
00:20:16.625 2894.00 IOPS, 192.18 MiB/s [2024-11-20T16:47:02.891Z] 2370.50 IOPS, 157.42 MiB/s [2024-11-20T16:47:03.829Z] 2304.67 IOPS, 153.04 MiB/s [2024-11-20T16:47:03.829Z] 2259.50 IOPS, 150.04 MiB/s 00:20:18.943 Latency(us) 00:20:18.943 [2024-11-20T16:47:03.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.943 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:18.943 ftl0 : 4.00 2258.76 150.00 0.00 0.00 462.15 173.29 2634.04 00:20:18.943 [2024-11-20T16:47:03.829Z] =================================================================================================================== 00:20:18.943 [2024-11-20T16:47:03.829Z] Total : 2258.76 150.00 0.00 0.00 462.15 173.29 2634.04 00:20:18.943 [2024-11-20 16:47:03.468777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:18.943 { 00:20:18.943 "results": [ 00:20:18.943 { 00:20:18.943 "job": "ftl0", 00:20:18.943 "core_mask": "0x1", 00:20:18.943 "workload": "randwrite", 00:20:18.943 "status": "finished", 00:20:18.943 "queue_depth": 1, 00:20:18.943 "io_size": 69632, 00:20:18.943 "runtime": 4.001745, 00:20:18.943 "iops": 2258.76461393717, 00:20:18.943 "mibps": 149.9960876442652, 00:20:18.943 "io_failed": 0, 00:20:18.943 "io_timeout": 0, 00:20:18.943 "avg_latency_us": 462.154388079008, 00:20:18.943 "min_latency_us": 173.2923076923077, 00:20:18.943 "max_latency_us": 2634.043076923077 00:20:18.943 } 00:20:18.943 ], 00:20:18.943 "core_count": 1 00:20:18.943 } 00:20:18.943 16:47:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:18.943 [2024-11-20 16:47:03.572043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:18.943 Running I/O for 4 seconds... 
00:20:20.805 11741.00 IOPS, 45.86 MiB/s [2024-11-20T16:47:06.672Z] 11364.00 IOPS, 44.39 MiB/s [2024-11-20T16:47:07.605Z] 10681.33 IOPS, 41.72 MiB/s [2024-11-20T16:47:07.605Z] 10486.75 IOPS, 40.96 MiB/s 00:20:22.719 Latency(us) 00:20:22.719 [2024-11-20T16:47:07.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.719 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:22.719 ftl0 : 4.01 10480.81 40.94 0.00 0.00 12189.52 256.79 148413.83 00:20:22.719 [2024-11-20T16:47:07.605Z] =================================================================================================================== 00:20:22.719 [2024-11-20T16:47:07.605Z] Total : 10480.81 40.94 0.00 0.00 12189.52 0.00 148413.83 00:20:22.719 [2024-11-20 16:47:07.594824] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:22.977 { 00:20:22.977 "results": [ 00:20:22.977 { 00:20:22.977 "job": "ftl0", 00:20:22.977 "core_mask": "0x1", 00:20:22.977 "workload": "randwrite", 00:20:22.977 "status": "finished", 00:20:22.977 "queue_depth": 128, 00:20:22.977 "io_size": 4096, 00:20:22.977 "runtime": 4.014192, 00:20:22.977 "iops": 10480.814071673702, 00:20:22.977 "mibps": 40.9406799674754, 00:20:22.977 "io_failed": 0, 00:20:22.977 "io_timeout": 0, 00:20:22.977 "avg_latency_us": 12189.516406453406, 00:20:22.977 "min_latency_us": 256.7876923076923, 00:20:22.977 "max_latency_us": 148413.83384615384 00:20:22.977 } 00:20:22.977 ], 00:20:22.977 "core_count": 1 00:20:22.977 } 00:20:22.978 16:47:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:22.978 [2024-11-20 16:47:07.709126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:22.978 Running I/O for 4 seconds... 
00:20:24.842 8341.00 IOPS, 32.58 MiB/s [2024-11-20T16:47:10.737Z] 7942.50 IOPS, 31.03 MiB/s [2024-11-20T16:47:12.109Z] 8216.00 IOPS, 32.09 MiB/s [2024-11-20T16:47:12.109Z] 8351.25 IOPS, 32.62 MiB/s 00:20:27.223 Latency(us) 00:20:27.223 [2024-11-20T16:47:12.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.223 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:27.223 Verification LBA range: start 0x0 length 0x1400000 00:20:27.223 ftl0 : 4.01 8364.18 32.67 0.00 0.00 15256.79 220.55 137121.48 00:20:27.223 [2024-11-20T16:47:12.109Z] =================================================================================================================== 00:20:27.223 [2024-11-20T16:47:12.109Z] Total : 8364.18 32.67 0.00 0.00 15256.79 0.00 137121.48 00:20:27.223 { 00:20:27.223 "results": [ 00:20:27.223 { 00:20:27.223 "job": "ftl0", 00:20:27.223 "core_mask": "0x1", 00:20:27.223 "workload": "verify", 00:20:27.223 "status": "finished", 00:20:27.223 "verify_range": { 00:20:27.223 "start": 0, 00:20:27.223 "length": 20971520 00:20:27.223 }, 00:20:27.223 "queue_depth": 128, 00:20:27.223 "io_size": 4096, 00:20:27.223 "runtime": 4.009002, 00:20:27.223 "iops": 8364.176420964619, 00:20:27.223 "mibps": 32.67256414439304, 00:20:27.223 "io_failed": 0, 00:20:27.223 "io_timeout": 0, 00:20:27.223 "avg_latency_us": 15256.792927811779, 00:20:27.223 "min_latency_us": 220.55384615384617, 00:20:27.223 "max_latency_us": 137121.47692307693 00:20:27.223 } 00:20:27.223 ], 00:20:27.223 "core_count": 1 00:20:27.223 } 00:20:27.223 [2024-11-20 16:47:11.736446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:27.223 16:47:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:27.223 [2024-11-20 16:47:11.938539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.223 [2024-11-20 16:47:11.938589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:27.223 [2024-11-20 16:47:11.938604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.223 [2024-11-20 16:47:11.938614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.223 [2024-11-20 16:47:11.938634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:27.223 [2024-11-20 16:47:11.941220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.223 [2024-11-20 16:47:11.941248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:27.223 [2024-11-20 16:47:11.941261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:20:27.223 [2024-11-20 16:47:11.941269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.223 [2024-11-20 16:47:11.943005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.223 [2024-11-20 16:47:11.943107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:27.223 [2024-11-20 16:47:11.943128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:20:27.223 [2024-11-20 16:47:11.943136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.223 [2024-11-20 16:47:12.080165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.223 [2024-11-20 16:47:12.080329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:20:27.223 [2024-11-20 16:47:12.080413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 136.996 ms 00:20:27.223 [2024-11-20 16:47:12.080441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.223 [2024-11-20 16:47:12.086655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.223 [2024-11-20 16:47:12.086774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:27.223 [2024-11-20 16:47:12.086833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.133 ms 00:20:27.223 [2024-11-20 16:47:12.086885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.109798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.483 [2024-11-20 16:47:12.109920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:27.483 [2024-11-20 16:47:12.109998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.835 ms 00:20:27.483 [2024-11-20 16:47:12.110021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.124052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.483 [2024-11-20 16:47:12.124182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:27.483 [2024-11-20 16:47:12.124244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.984 ms 00:20:27.483 [2024-11-20 16:47:12.124255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.124407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.483 [2024-11-20 16:47:12.124419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:27.483 [2024-11-20 16:47:12.124432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:27.483 [2024-11-20 16:47:12.124439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.147184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.483 [2024-11-20 16:47:12.147307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:27.483 [2024-11-20 16:47:12.147325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.726 ms 00:20:27.483 [2024-11-20 16:47:12.147333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.169231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.483 [2024-11-20 16:47:12.169261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:27.483 [2024-11-20 16:47:12.169274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.865 ms 00:20:27.483 [2024-11-20 16:47:12.169281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.190795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.483 [2024-11-20 16:47:12.190831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:27.483 [2024-11-20 16:47:12.190844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.480 ms 00:20:27.483 [2024-11-20 16:47:12.190851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.216478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.483 [2024-11-20 
16:47:12.216616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:27.483 [2024-11-20 16:47:12.216639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.561 ms 00:20:27.483 [2024-11-20 16:47:12.216647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.483 [2024-11-20 16:47:12.216678] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:27.483 [2024-11-20 16:47:12.216692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:27.483 [2024-11-20 16:47:12.216993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217308] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217559] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:27.484 [2024-11-20 16:47:12.217602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:27.484 [2024-11-20 16:47:12.217611] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 752e9430-14eb-4cd1-867a-963d5ae929d4 00:20:27.484 [2024-11-20 16:47:12.217619] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:27.484 [2024-11-20 16:47:12.217628] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:27.484 [2024-11-20 16:47:12.217637] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:27.484 [2024-11-20 16:47:12.217646] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:27.484 [2024-11-20 16:47:12.217652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:27.484 [2024-11-20 16:47:12.217661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:27.484 [2024-11-20 16:47:12.217668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:27.484 [2024-11-20 16:47:12.217677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:27.484 [2024-11-20 16:47:12.217683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:27.484 [2024-11-20 16:47:12.217692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.484 [2024-11-20 16:47:12.217700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:27.484 [2024-11-20 16:47:12.217710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:20:27.484 [2024-11-20 16:47:12.217717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.484 [2024-11-20 16:47:12.230047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.484 [2024-11-20 16:47:12.230086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:27.484 [2024-11-20 16:47:12.230100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms 00:20:27.484 [2024-11-20 16:47:12.230108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.484 [2024-11-20 16:47:12.230466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.484 [2024-11-20 16:47:12.230484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:27.484 [2024-11-20 16:47:12.230494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:20:27.484 [2024-11-20 16:47:12.230502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.484 [2024-11-20 16:47:12.265060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.484 [2024-11-20 16:47:12.265105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.484 [2024-11-20 16:47:12.265120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.484 [2024-11-20 16:47:12.265128] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:27.484 [2024-11-20 16:47:12.265190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.484 [2024-11-20 16:47:12.265199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.484 [2024-11-20 16:47:12.265209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.484 [2024-11-20 16:47:12.265216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.484 [2024-11-20 16:47:12.265292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.485 [2024-11-20 16:47:12.265305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.485 [2024-11-20 16:47:12.265314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.485 [2024-11-20 16:47:12.265321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.485 [2024-11-20 16:47:12.265354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.485 [2024-11-20 16:47:12.265362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.485 [2024-11-20 16:47:12.265371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.485 [2024-11-20 16:47:12.265398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.485 [2024-11-20 16:47:12.340827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.485 [2024-11-20 16:47:12.340873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.485 [2024-11-20 16:47:12.340888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.485 [2024-11-20 16:47:12.340896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.402806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.743 [2024-11-20 16:47:12.402853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.743 [2024-11-20 16:47:12.402867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.743 [2024-11-20 16:47:12.402874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.402963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.743 [2024-11-20 16:47:12.402973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.743 [2024-11-20 16:47:12.402985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.743 [2024-11-20 16:47:12.402992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.403033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.743 [2024-11-20 16:47:12.403042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.743 [2024-11-20 16:47:12.403052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.743 [2024-11-20 16:47:12.403059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.403143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.743 [2024-11-20 16:47:12.403152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.743 [2024-11-20 16:47:12.403167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:27.743 [2024-11-20 16:47:12.403174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.403201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.743 [2024-11-20 16:47:12.403210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.743 [2024-11-20 16:47:12.403218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.743 [2024-11-20 16:47:12.403225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.403258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.743 [2024-11-20 16:47:12.403267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.743 [2024-11-20 16:47:12.403275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.743 [2024-11-20 16:47:12.403284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.403326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.743 [2024-11-20 16:47:12.403341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.743 [2024-11-20 16:47:12.403350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.743 [2024-11-20 16:47:12.403358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.743 [2024-11-20 16:47:12.403511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 464.923 ms, result 0 00:20:27.743 true 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75909 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75909 ']' 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75909 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75909 00:20:27.743 killing process with pid 75909 00:20:27.743 Received shutdown signal, test time was about 4.000000 seconds 00:20:27.743 00:20:27.743 Latency(us) 00:20:27.743 [2024-11-20T16:47:12.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.743 [2024-11-20T16:47:12.629Z] =================================================================================================================== 00:20:27.743 [2024-11-20T16:47:12.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75909' 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75909 00:20:27.743 16:47:12 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75909 00:20:29.642 Remove shared memory files 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:29.642 16:47:14 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:29.642 ************************************ 00:20:29.642 END TEST ftl_bdevperf 00:20:29.642 ************************************ 00:20:29.642 00:20:29.642 real 0m22.118s 00:20:29.642 user 0m24.869s 00:20:29.642 sys 0m0.837s 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.642 16:47:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:29.642 16:47:14 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:29.642 16:47:14 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:29.642 16:47:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.642 16:47:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:29.642 ************************************ 00:20:29.642 START TEST ftl_trim 00:20:29.642 ************************************ 00:20:29.642 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:29.901 * Looking for test storage... 00:20:29.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.901 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:29.901 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:29.901 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:20:29.901 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.901 16:47:14 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.902 16:47:14 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:29.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.902 --rc genhtml_branch_coverage=1 00:20:29.902 --rc genhtml_function_coverage=1 00:20:29.902 --rc genhtml_legend=1 00:20:29.902 --rc geninfo_all_blocks=1 00:20:29.902 --rc geninfo_unexecuted_blocks=1 00:20:29.902 00:20:29.902 ' 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:29.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.902 --rc genhtml_branch_coverage=1 00:20:29.902 --rc genhtml_function_coverage=1 00:20:29.902 --rc genhtml_legend=1 00:20:29.902 --rc geninfo_all_blocks=1 00:20:29.902 --rc geninfo_unexecuted_blocks=1 00:20:29.902 00:20:29.902 ' 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:29.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.902 --rc genhtml_branch_coverage=1 00:20:29.902 --rc genhtml_function_coverage=1 00:20:29.902 --rc genhtml_legend=1 00:20:29.902 --rc geninfo_all_blocks=1 00:20:29.902 --rc geninfo_unexecuted_blocks=1 00:20:29.902 00:20:29.902 ' 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:29.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.902 --rc genhtml_branch_coverage=1 00:20:29.902 --rc genhtml_function_coverage=1 00:20:29.902 --rc genhtml_legend=1 00:20:29.902 --rc geninfo_all_blocks=1 00:20:29.902 --rc geninfo_unexecuted_blocks=1 00:20:29.902 00:20:29.902 ' 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
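The cmp_versions trace above is the stock lcov version gate from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared element by element, which is why "1.15" sorts below "2". A standalone sketch of that comparison idea (the function name and layout here are illustrative, not the script's own):

version_lt() {                       # returns 0 when $1 is an older version than $2
  local IFS='.-:' i
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                           # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"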
00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:29.902 16:47:14 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76250 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:29.902 16:47:14 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76250 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76250 ']' 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.902 16:47:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 [2024-11-20 16:47:14.734675] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:20:29.902 [2024-11-20 16:47:14.734947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76250 ] 00:20:30.161 [2024-11-20 16:47:14.889034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:30.161 [2024-11-20 16:47:15.016735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.161 [2024-11-20 16:47:15.016798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.161 [2024-11-20 16:47:15.016782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.095 16:47:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.095 16:47:15 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:31.095 16:47:15 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:31.095 16:47:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:31.095 16:47:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:31.095 16:47:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:31.095 16:47:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:31.095 16:47:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:31.354 16:47:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:31.354 { 00:20:31.354 "name": "nvme0n1", 00:20:31.354 "aliases": [ 
00:20:31.354 "00e87644-d23a-4b01-906d-40d8cc1c2046" 00:20:31.354 ], 00:20:31.354 "product_name": "NVMe disk", 00:20:31.354 "block_size": 4096, 00:20:31.354 "num_blocks": 1310720, 00:20:31.354 "uuid": "00e87644-d23a-4b01-906d-40d8cc1c2046", 00:20:31.354 "numa_id": -1, 00:20:31.354 "assigned_rate_limits": { 00:20:31.354 "rw_ios_per_sec": 0, 00:20:31.354 "rw_mbytes_per_sec": 0, 00:20:31.354 "r_mbytes_per_sec": 0, 00:20:31.354 "w_mbytes_per_sec": 0 00:20:31.354 }, 00:20:31.354 "claimed": true, 00:20:31.354 "claim_type": "read_many_write_one", 00:20:31.354 "zoned": false, 00:20:31.354 "supported_io_types": { 00:20:31.354 "read": true, 00:20:31.354 "write": true, 00:20:31.354 "unmap": true, 00:20:31.354 "flush": true, 00:20:31.354 "reset": true, 00:20:31.354 "nvme_admin": true, 00:20:31.354 "nvme_io": true, 00:20:31.354 "nvme_io_md": false, 00:20:31.354 "write_zeroes": true, 00:20:31.354 "zcopy": false, 00:20:31.354 "get_zone_info": false, 00:20:31.354 "zone_management": false, 00:20:31.354 "zone_append": false, 00:20:31.354 "compare": true, 00:20:31.354 "compare_and_write": false, 00:20:31.354 "abort": true, 00:20:31.354 "seek_hole": false, 00:20:31.354 "seek_data": false, 00:20:31.354 "copy": true, 00:20:31.354 "nvme_iov_md": false 00:20:31.354 }, 00:20:31.354 "driver_specific": { 00:20:31.354 "nvme": [ 00:20:31.354 { 00:20:31.354 "pci_address": "0000:00:11.0", 00:20:31.354 "trid": { 00:20:31.354 "trtype": "PCIe", 00:20:31.354 "traddr": "0000:00:11.0" 00:20:31.354 }, 00:20:31.354 "ctrlr_data": { 00:20:31.354 "cntlid": 0, 00:20:31.354 "vendor_id": "0x1b36", 00:20:31.354 "model_number": "QEMU NVMe Ctrl", 00:20:31.354 "serial_number": "12341", 00:20:31.354 "firmware_revision": "8.0.0", 00:20:31.354 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:31.354 "oacs": { 00:20:31.354 "security": 0, 00:20:31.354 "format": 1, 00:20:31.354 "firmware": 0, 00:20:31.354 "ns_manage": 1 00:20:31.354 }, 00:20:31.354 "multi_ctrlr": false, 00:20:31.354 "ana_reporting": false 00:20:31.354 }, 00:20:31.354 "vs": { 00:20:31.354 "nvme_version": "1.4" 00:20:31.354 }, 00:20:31.354 "ns_data": { 00:20:31.354 "id": 1, 00:20:31.354 "can_share": false 00:20:31.354 } 00:20:31.354 } 00:20:31.354 ], 00:20:31.354 "mp_policy": "active_passive" 00:20:31.354 } 00:20:31.354 } 00:20:31.354 ]' 00:20:31.354 16:47:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:31.354 16:47:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:31.354 16:47:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:31.354 16:47:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:31.354 16:47:16 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:31.354 16:47:16 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:31.354 16:47:16 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:31.354 16:47:16 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:31.355 16:47:16 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:31.355 16:47:16 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:31.355 16:47:16 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:31.612 16:47:16 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9944a104-2461-4e41-bb09-b96d6918901e 00:20:31.612 16:47:16 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:31.612 16:47:16 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 9944a104-2461-4e41-bb09-b96d6918901e 00:20:31.869 16:47:16 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:32.127 16:47:16 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=96a65a34-ad3f-422a-8e34-241542fe539d 00:20:32.127 16:47:16 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 96a65a34-ad3f-422a-8e34-241542fe539d 00:20:32.385 16:47:17 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.385 16:47:17 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.385 16:47:17 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:32.385 16:47:17 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:32.385 16:47:17 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.385 16:47:17 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:32.385 16:47:17 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.385 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.385 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:32.385 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:32.385 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:32.385 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.385 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:32.385 { 00:20:32.385 "name": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 00:20:32.385 "aliases": [ 00:20:32.385 "lvs/nvme0n1p0" 00:20:32.385 ], 00:20:32.385 "product_name": "Logical Volume", 00:20:32.385 "block_size": 4096, 00:20:32.385 "num_blocks": 26476544, 00:20:32.385 "uuid": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 00:20:32.385 "assigned_rate_limits": { 00:20:32.385 "rw_ios_per_sec": 0, 00:20:32.385 "rw_mbytes_per_sec": 0, 00:20:32.385 "r_mbytes_per_sec": 0, 00:20:32.385 "w_mbytes_per_sec": 0 00:20:32.385 }, 00:20:32.385 "claimed": false, 00:20:32.385 "zoned": false, 00:20:32.385 "supported_io_types": { 00:20:32.385 "read": true, 00:20:32.385 "write": true, 00:20:32.385 "unmap": true, 00:20:32.385 "flush": false, 00:20:32.385 "reset": true, 00:20:32.385 "nvme_admin": false, 00:20:32.385 "nvme_io": false, 00:20:32.385 "nvme_io_md": false, 00:20:32.385 "write_zeroes": true, 00:20:32.385 "zcopy": false, 00:20:32.385 "get_zone_info": false, 00:20:32.385 "zone_management": false, 00:20:32.385 "zone_append": false, 00:20:32.385 "compare": false, 00:20:32.385 "compare_and_write": false, 00:20:32.385 "abort": false, 00:20:32.385 "seek_hole": true, 00:20:32.385 "seek_data": true, 00:20:32.385 "copy": false, 00:20:32.385 "nvme_iov_md": false 00:20:32.385 }, 00:20:32.385 "driver_specific": { 00:20:32.385 "lvol": { 00:20:32.385 "lvol_store_uuid": "96a65a34-ad3f-422a-8e34-241542fe539d", 00:20:32.385 "base_bdev": "nvme0n1", 00:20:32.385 "thin_provision": true, 00:20:32.385 "num_allocated_clusters": 0, 00:20:32.385 "snapshot": false, 00:20:32.385 "clone": false, 00:20:32.385 "esnap_clone": false 00:20:32.385 } 00:20:32.385 } 00:20:32.385 } 00:20:32.385 ]' 00:20:32.385 16:47:17 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:32.707 16:47:17 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:32.707 16:47:17 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:32.707 16:47:17 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:32.707 16:47:17 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:32.707 16:47:17 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:32.707 16:47:17 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:32.707 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:32.966 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:32.966 { 00:20:32.966 "name": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 00:20:32.966 "aliases": [ 00:20:32.966 "lvs/nvme0n1p0" 00:20:32.966 ], 00:20:32.966 "product_name": "Logical Volume", 00:20:32.966 "block_size": 4096, 00:20:32.966 "num_blocks": 26476544, 00:20:32.966 "uuid": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 00:20:32.966 "assigned_rate_limits": { 00:20:32.966 "rw_ios_per_sec": 0, 00:20:32.966 "rw_mbytes_per_sec": 0, 00:20:32.966 "r_mbytes_per_sec": 0, 00:20:32.966 "w_mbytes_per_sec": 0 00:20:32.966 }, 00:20:32.966 "claimed": false, 00:20:32.966 "zoned": false, 00:20:32.966 "supported_io_types": { 00:20:32.966 "read": true, 00:20:32.966 "write": true, 00:20:32.966 "unmap": true, 00:20:32.966 "flush": false, 00:20:32.966 "reset": true, 00:20:32.966 "nvme_admin": false, 00:20:32.966 "nvme_io": false, 00:20:32.966 "nvme_io_md": false, 00:20:32.966 "write_zeroes": true, 00:20:32.966 "zcopy": false, 00:20:32.966 "get_zone_info": false, 00:20:32.966 "zone_management": false, 00:20:32.966 "zone_append": false, 00:20:32.966 "compare": false, 00:20:32.966 "compare_and_write": false, 00:20:32.966 "abort": false, 00:20:32.966 "seek_hole": true, 00:20:32.966 "seek_data": true, 00:20:32.966 "copy": false, 00:20:32.966 "nvme_iov_md": false 00:20:32.966 }, 00:20:32.966 "driver_specific": { 00:20:32.966 "lvol": { 00:20:32.966 "lvol_store_uuid": "96a65a34-ad3f-422a-8e34-241542fe539d", 00:20:32.966 "base_bdev": "nvme0n1", 00:20:32.966 "thin_provision": true, 00:20:32.966 "num_allocated_clusters": 0, 00:20:32.966 "snapshot": false, 00:20:32.966 "clone": false, 00:20:32.966 "esnap_clone": false 00:20:32.966 } 00:20:32.966 } 00:20:32.966 } 00:20:32.966 ]' 00:20:32.966 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:32.966 16:47:17 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:32.966 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:32.966 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:32.966 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:32.966 16:47:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:32.966 16:47:17 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:32.966 16:47:17 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:33.226 16:47:18 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:33.226 16:47:18 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:33.226 16:47:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:33.226 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:33.226 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:33.226 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:33.226 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:33.226 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8a712b9-6ffb-4f75-b5a6-53083d7d7858 00:20:33.486 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:33.486 { 00:20:33.486 "name": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 00:20:33.486 "aliases": [ 00:20:33.486 "lvs/nvme0n1p0" 00:20:33.486 ], 00:20:33.486 "product_name": "Logical Volume", 00:20:33.486 "block_size": 4096, 00:20:33.486 "num_blocks": 26476544, 00:20:33.486 "uuid": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 00:20:33.486 "assigned_rate_limits": { 00:20:33.486 "rw_ios_per_sec": 0, 00:20:33.486 "rw_mbytes_per_sec": 0, 00:20:33.486 "r_mbytes_per_sec": 0, 00:20:33.486 "w_mbytes_per_sec": 0 00:20:33.486 }, 00:20:33.486 "claimed": false, 00:20:33.486 "zoned": false, 00:20:33.486 "supported_io_types": { 00:20:33.486 "read": true, 00:20:33.486 "write": true, 00:20:33.486 "unmap": true, 00:20:33.486 "flush": false, 00:20:33.486 "reset": true, 00:20:33.486 "nvme_admin": false, 00:20:33.486 "nvme_io": false, 00:20:33.486 "nvme_io_md": false, 00:20:33.486 "write_zeroes": true, 00:20:33.486 "zcopy": false, 00:20:33.486 "get_zone_info": false, 00:20:33.486 "zone_management": false, 00:20:33.486 "zone_append": false, 00:20:33.486 "compare": false, 00:20:33.486 "compare_and_write": false, 00:20:33.486 "abort": false, 00:20:33.486 "seek_hole": true, 00:20:33.486 "seek_data": true, 00:20:33.486 "copy": false, 00:20:33.486 "nvme_iov_md": false 00:20:33.486 }, 00:20:33.486 "driver_specific": { 00:20:33.486 "lvol": { 00:20:33.486 "lvol_store_uuid": "96a65a34-ad3f-422a-8e34-241542fe539d", 00:20:33.486 "base_bdev": "nvme0n1", 00:20:33.486 "thin_provision": true, 00:20:33.486 "num_allocated_clusters": 0, 00:20:33.486 "snapshot": false, 00:20:33.486 "clone": false, 00:20:33.486 "esnap_clone": false 00:20:33.486 } 00:20:33.486 } 00:20:33.486 } 00:20:33.486 ]' 00:20:33.486 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:33.486 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:33.486 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:33.486 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:33.486 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:33.486 16:47:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:33.486 16:47:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:33.486 16:47:18 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f8a712b9-6ffb-4f75-b5a6-53083d7d7858 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:33.810 [2024-11-20 16:47:18.474580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.474630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:33.810 [2024-11-20 16:47:18.474645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.810 [2024-11-20 16:47:18.474653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.477500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.477542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.810 [2024-11-20 16:47:18.477554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:20:33.810 [2024-11-20 16:47:18.477562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.477747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:33.810 [2024-11-20 16:47:18.478479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:33.810 [2024-11-20 16:47:18.478509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.478517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.810 [2024-11-20 16:47:18.478526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:20:33.810 [2024-11-20 16:47:18.478534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.478625] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:20:33.810 [2024-11-20 16:47:18.479611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.479739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:33.810 [2024-11-20 16:47:18.479755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:33.810 [2024-11-20 16:47:18.479764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.484874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.484904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.810 [2024-11-20 16:47:18.484917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:20:33.810 [2024-11-20 16:47:18.484929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.485054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.485067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.810 [2024-11-20 16:47:18.485075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.069 ms 00:20:33.810 [2024-11-20 16:47:18.485086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.485120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.485130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:33.810 [2024-11-20 16:47:18.485138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:33.810 [2024-11-20 16:47:18.485146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.485175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:33.810 [2024-11-20 16:47:18.488697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.488725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.810 [2024-11-20 16:47:18.488740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:20:33.810 [2024-11-20 16:47:18.488748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.488789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.488797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:33.810 [2024-11-20 16:47:18.488806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:33.810 [2024-11-20 16:47:18.488825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.810 [2024-11-20 16:47:18.488856] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:33.810 [2024-11-20 16:47:18.488988] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:33.810 [2024-11-20 16:47:18.489002] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:33.810 [2024-11-20 16:47:18.489013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:33.810 [2024-11-20 16:47:18.489025] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:33.810 [2024-11-20 16:47:18.489034] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:33.810 [2024-11-20 16:47:18.489043] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:33.810 [2024-11-20 16:47:18.489050] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:33.810 [2024-11-20 16:47:18.489058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:33.810 [2024-11-20 16:47:18.489067] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:33.810 [2024-11-20 16:47:18.489076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.810 [2024-11-20 16:47:18.489083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:33.811 [2024-11-20 16:47:18.489092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:20:33.811 [2024-11-20 16:47:18.489099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.811 [2024-11-20 16:47:18.489208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.811 
[2024-11-20 16:47:18.489221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:33.811 [2024-11-20 16:47:18.489231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:33.811 [2024-11-20 16:47:18.489238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.811 [2024-11-20 16:47:18.489353] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:33.811 [2024-11-20 16:47:18.489361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:33.811 [2024-11-20 16:47:18.489371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:33.811 [2024-11-20 16:47:18.489411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:33.811 [2024-11-20 16:47:18.489435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.811 [2024-11-20 16:47:18.489449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:33.811 [2024-11-20 16:47:18.489456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:33.811 [2024-11-20 16:47:18.489463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.811 [2024-11-20 16:47:18.489470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:33.811 [2024-11-20 16:47:18.489478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:33.811 [2024-11-20 16:47:18.489484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:33.811 [2024-11-20 16:47:18.489502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:33.811 [2024-11-20 16:47:18.489535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:33.811 [2024-11-20 16:47:18.489557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:33.811 [2024-11-20 16:47:18.489579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:33.811 [2024-11-20 16:47:18.489600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:33.811 [2024-11-20 16:47:18.489625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.811 [2024-11-20 16:47:18.489639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:33.811 [2024-11-20 16:47:18.489645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:33.811 [2024-11-20 16:47:18.489653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.811 [2024-11-20 16:47:18.489659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:33.811 [2024-11-20 16:47:18.489667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:33.811 [2024-11-20 16:47:18.489673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:33.811 [2024-11-20 16:47:18.489692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:33.811 [2024-11-20 16:47:18.489700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489706] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:33.811 [2024-11-20 16:47:18.489715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:33.811 [2024-11-20 16:47:18.489722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.811 [2024-11-20 16:47:18.489738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:33.811 [2024-11-20 16:47:18.489748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:33.811 [2024-11-20 16:47:18.489755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:33.811 [2024-11-20 16:47:18.489765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:33.811 [2024-11-20 16:47:18.489771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:33.811 [2024-11-20 16:47:18.489779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:33.811 [2024-11-20 16:47:18.489788] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:33.811 [2024-11-20 16:47:18.489800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.811 [2024-11-20 16:47:18.489808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:33.811 [2024-11-20 16:47:18.489816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:33.811 [2024-11-20 16:47:18.489824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:33.811 [2024-11-20 16:47:18.489832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:33.811 [2024-11-20 16:47:18.489839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:33.811 [2024-11-20 16:47:18.489848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:33.811 [2024-11-20 16:47:18.489854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:33.811 [2024-11-20 16:47:18.489863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:33.811 [2024-11-20 16:47:18.489870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:33.811 [2024-11-20 16:47:18.489880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:33.811 [2024-11-20 16:47:18.489886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:33.811 [2024-11-20 16:47:18.489895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:33.811 [2024-11-20 16:47:18.489902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:33.811 [2024-11-20 16:47:18.489911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:33.811 [2024-11-20 16:47:18.489917] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:33.811 [2024-11-20 16:47:18.489932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.811 [2024-11-20 16:47:18.489940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:33.811 [2024-11-20 16:47:18.489948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:33.811 [2024-11-20 16:47:18.489955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:33.811 [2024-11-20 16:47:18.489964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:33.811 [2024-11-20 16:47:18.489971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.811 [2024-11-20 16:47:18.489980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:33.811 [2024-11-20 16:47:18.489987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:20:33.811 [2024-11-20 16:47:18.489995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.811 [2024-11-20 16:47:18.490060] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:33.812 [2024-11-20 16:47:18.490077] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:36.365 [2024-11-20 16:47:20.957138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:20.957200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:36.365 [2024-11-20 16:47:20.957217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2467.067 ms 00:20:36.365 [2024-11-20 16:47:20.957228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:20.982210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:20.982260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.365 [2024-11-20 16:47:20.982273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.736 ms 00:20:36.365 [2024-11-20 16:47:20.982282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:20.982431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:20.982444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.365 [2024-11-20 16:47:20.982453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:36.365 [2024-11-20 16:47:20.982464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.025961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.026023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.365 [2024-11-20 16:47:21.026042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.447 ms 00:20:36.365 [2024-11-20 16:47:21.026057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.026184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.026203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.365 [2024-11-20 16:47:21.026216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.365 [2024-11-20 16:47:21.026228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.026624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.026650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.365 [2024-11-20 16:47:21.026663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:20:36.365 [2024-11-20 16:47:21.026676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.026835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.026855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.365 [2024-11-20 16:47:21.026867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:20:36.365 [2024-11-20 16:47:21.026881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.041329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.041478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:36.365 [2024-11-20 16:47:21.041493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.395 ms 00:20:36.365 [2024-11-20 16:47:21.041501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.050632] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:36.365 [2024-11-20 16:47:21.063239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.063284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.365 [2024-11-20 16:47:21.063299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.638 ms 00:20:36.365 [2024-11-20 16:47:21.063306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.117124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.117293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:36.365 [2024-11-20 16:47:21.117313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.745 ms 00:20:36.365 [2024-11-20 16:47:21.117320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.117513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.117523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.365 [2024-11-20 16:47:21.117534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:36.365 [2024-11-20 16:47:21.117540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.135740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.135854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:36.365 [2024-11-20 16:47:21.135872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:20:36.365 [2024-11-20 16:47:21.135878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.153414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.153445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:36.365 [2024-11-20 16:47:21.153457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.494 ms 00:20:36.365 [2024-11-20 16:47:21.153463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.153935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.153943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.365 [2024-11-20 16:47:21.153951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:20:36.365 [2024-11-20 16:47:21.153957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.365 [2024-11-20 16:47:21.212958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.213006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:36.365 [2024-11-20 16:47:21.213024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.970 ms 00:20:36.365 [2024-11-20 16:47:21.213031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:36.365 [2024-11-20 16:47:21.232912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.365 [2024-11-20 16:47:21.232955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:36.365 [2024-11-20 16:47:21.232967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.807 ms 00:20:36.365 [2024-11-20 16:47:21.232975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.623 [2024-11-20 16:47:21.253987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.623 [2024-11-20 16:47:21.254032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:36.623 [2024-11-20 16:47:21.254045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.953 ms 00:20:36.623 [2024-11-20 16:47:21.254052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.623 [2024-11-20 16:47:21.273246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.623 [2024-11-20 16:47:21.273287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.623 [2024-11-20 16:47:21.273301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.132 ms 00:20:36.623 [2024-11-20 16:47:21.273318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.623 [2024-11-20 16:47:21.273364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.623 [2024-11-20 16:47:21.273374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.623 [2024-11-20 16:47:21.273399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:36.623 [2024-11-20 16:47:21.273405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.623 [2024-11-20 16:47:21.273472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.623 [2024-11-20 16:47:21.273479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.623 [2024-11-20 16:47:21.273487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:36.623 [2024-11-20 16:47:21.273493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.623 [2024-11-20 16:47:21.274159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:36.623 [2024-11-20 16:47:21.276589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2799.358 ms, result 0 00:20:36.623 [2024-11-20 16:47:21.277216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:36.623 { 00:20:36.623 "name": "ftl0", 00:20:36.623 "uuid": "b7e88b07-7c78-4b45-9425-1d3115e8b9f8" 00:20:36.623 } 00:20:36.623 16:47:21 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:36.623 16:47:21 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:36.623 16:47:21 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:36.623 16:47:21 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:36.623 16:47:21 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:36.623 16:47:21 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:36.623 16:47:21 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:36.623 16:47:21 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:36.881 [ 00:20:36.881 { 00:20:36.881 "name": "ftl0", 00:20:36.881 "aliases": [ 00:20:36.881 "b7e88b07-7c78-4b45-9425-1d3115e8b9f8" 00:20:36.881 ], 00:20:36.881 "product_name": "FTL disk", 00:20:36.881 "block_size": 4096, 00:20:36.881 "num_blocks": 23592960, 00:20:36.881 "uuid": "b7e88b07-7c78-4b45-9425-1d3115e8b9f8", 00:20:36.881 "assigned_rate_limits": { 00:20:36.881 "rw_ios_per_sec": 0, 00:20:36.881 "rw_mbytes_per_sec": 0, 00:20:36.881 "r_mbytes_per_sec": 0, 00:20:36.881 "w_mbytes_per_sec": 0 00:20:36.881 }, 00:20:36.881 "claimed": false, 00:20:36.881 "zoned": false, 00:20:36.881 "supported_io_types": { 00:20:36.881 "read": true, 00:20:36.881 "write": true, 00:20:36.881 "unmap": true, 00:20:36.881 "flush": true, 00:20:36.881 "reset": false, 00:20:36.881 "nvme_admin": false, 00:20:36.881 "nvme_io": false, 00:20:36.881 "nvme_io_md": false, 00:20:36.881 "write_zeroes": true, 00:20:36.881 "zcopy": false, 00:20:36.881 "get_zone_info": false, 00:20:36.881 "zone_management": false, 00:20:36.881 "zone_append": false, 00:20:36.881 "compare": false, 00:20:36.881 "compare_and_write": false, 00:20:36.881 "abort": false, 00:20:36.881 "seek_hole": false, 00:20:36.881 "seek_data": false, 00:20:36.881 "copy": false, 00:20:36.881 "nvme_iov_md": false 00:20:36.881 }, 00:20:36.881 "driver_specific": { 00:20:36.881 "ftl": { 00:20:36.881 "base_bdev": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 00:20:36.881 "cache": "nvc0n1p0" 00:20:36.881 } 00:20:36.881 } 00:20:36.881 } 00:20:36.881 ] 00:20:36.881 16:47:21 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:36.881 16:47:21 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:36.881 16:47:21 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:37.137 16:47:21 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:37.137 16:47:21 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:37.394 16:47:22 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:37.394 { 00:20:37.394 "name": "ftl0", 00:20:37.394 "aliases": [ 00:20:37.394 "b7e88b07-7c78-4b45-9425-1d3115e8b9f8" 00:20:37.394 ], 00:20:37.394 "product_name": "FTL disk", 00:20:37.394 "block_size": 4096, 00:20:37.394 "num_blocks": 23592960, 00:20:37.394 "uuid": "b7e88b07-7c78-4b45-9425-1d3115e8b9f8", 00:20:37.394 "assigned_rate_limits": { 00:20:37.394 "rw_ios_per_sec": 0, 00:20:37.394 "rw_mbytes_per_sec": 0, 00:20:37.394 "r_mbytes_per_sec": 0, 00:20:37.394 "w_mbytes_per_sec": 0 00:20:37.394 }, 00:20:37.394 "claimed": false, 00:20:37.394 "zoned": false, 00:20:37.394 "supported_io_types": { 00:20:37.394 "read": true, 00:20:37.394 "write": true, 00:20:37.394 "unmap": true, 00:20:37.394 "flush": true, 00:20:37.394 "reset": false, 00:20:37.394 "nvme_admin": false, 00:20:37.394 "nvme_io": false, 00:20:37.394 "nvme_io_md": false, 00:20:37.394 "write_zeroes": true, 00:20:37.394 "zcopy": false, 00:20:37.394 "get_zone_info": false, 00:20:37.394 "zone_management": false, 00:20:37.394 "zone_append": false, 00:20:37.394 "compare": false, 00:20:37.394 "compare_and_write": false, 00:20:37.394 "abort": false, 00:20:37.394 "seek_hole": false, 00:20:37.394 "seek_data": false, 00:20:37.394 "copy": false, 00:20:37.394 "nvme_iov_md": false 00:20:37.394 }, 00:20:37.394 "driver_specific": { 00:20:37.394 "ftl": { 00:20:37.394 "base_bdev": "f8a712b9-6ffb-4f75-b5a6-53083d7d7858", 
00:20:37.394 "cache": "nvc0n1p0" 00:20:37.394 } 00:20:37.394 } 00:20:37.394 } 00:20:37.394 ]' 00:20:37.394 16:47:22 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:37.394 16:47:22 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:37.394 16:47:22 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:37.653 [2024-11-20 16:47:22.342906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.343170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.653 [2024-11-20 16:47:22.343196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:37.653 [2024-11-20 16:47:22.343211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.343247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:37.653 [2024-11-20 16:47:22.346653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.346689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.653 [2024-11-20 16:47:22.346708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.381 ms 00:20:37.653 [2024-11-20 16:47:22.346718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.347266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.347291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.653 [2024-11-20 16:47:22.347304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:20:37.653 [2024-11-20 16:47:22.347315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.351621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.351650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.653 [2024-11-20 16:47:22.351662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:20:37.653 [2024-11-20 16:47:22.351670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.359013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.359042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.653 [2024-11-20 16:47:22.359054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.287 ms 00:20:37.653 [2024-11-20 16:47:22.359061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.385420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.385565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.653 [2024-11-20 16:47:22.385588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.284 ms 00:20:37.653 [2024-11-20 16:47:22.385597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.400868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.401014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.653 [2024-11-20 16:47:22.401035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.196 ms 00:20:37.653 [2024-11-20 16:47:22.401046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.401248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.653 [2024-11-20 16:47:22.401259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.653 [2024-11-20 16:47:22.401270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:20:37.653 [2024-11-20 16:47:22.401277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.653 [2024-11-20 16:47:22.423650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.654 [2024-11-20 16:47:22.423800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.654 [2024-11-20 16:47:22.423819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.344 ms 00:20:37.654 [2024-11-20 16:47:22.423827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.654 [2024-11-20 16:47:22.446055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.654 [2024-11-20 16:47:22.446093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.654 [2024-11-20 16:47:22.446108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.171 ms 00:20:37.654 [2024-11-20 16:47:22.446117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.654 [2024-11-20 16:47:22.467855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.654 [2024-11-20 16:47:22.467892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.654 [2024-11-20 16:47:22.467905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.672 ms 00:20:37.654 [2024-11-20 16:47:22.467914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.654 [2024-11-20 16:47:22.489572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.654 [2024-11-20 16:47:22.489737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.654 [2024-11-20 16:47:22.489757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.553 ms 00:20:37.654 [2024-11-20 16:47:22.489765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.654 [2024-11-20 16:47:22.489821] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.654 [2024-11-20 16:47:22.489836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489901] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.489999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 
[2024-11-20 16:47:22.490123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:37.654 [2024-11-20 16:47:22.490331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.654 [2024-11-20 16:47:22.490502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.655 [2024-11-20 16:47:22.490712] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.655 [2024-11-20 16:47:22.490723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:20:37.655 [2024-11-20 16:47:22.490730] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.655 [2024-11-20 16:47:22.490739] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.655 [2024-11-20 16:47:22.490745] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.655 [2024-11-20 16:47:22.490755] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.655 [2024-11-20 16:47:22.490763] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.655 [2024-11-20 16:47:22.490772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:37.655 [2024-11-20 16:47:22.490779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.655 [2024-11-20 16:47:22.490787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.655 [2024-11-20 16:47:22.490794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.655 [2024-11-20 16:47:22.490803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.655 [2024-11-20 16:47:22.490810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.655 [2024-11-20 16:47:22.490819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:20:37.655 [2024-11-20 16:47:22.490826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.655 [2024-11-20 16:47:22.503273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.655 [2024-11-20 16:47:22.503304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.655 [2024-11-20 16:47:22.503321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.414 ms 00:20:37.655 [2024-11-20 16:47:22.503329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.655 [2024-11-20 16:47:22.503716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.655 [2024-11-20 16:47:22.503735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.655 [2024-11-20 16:47:22.503745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:20:37.655 [2024-11-20 16:47:22.503753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.547172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.547214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.914 [2024-11-20 16:47:22.547227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.547236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.547350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.547359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.914 [2024-11-20 16:47:22.547368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.547375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.547451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.547460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.914 [2024-11-20 16:47:22.547474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.547481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.547510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.547517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.914 [2024-11-20 16:47:22.547526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.547533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.627740] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.628488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.914 [2024-11-20 16:47:22.628514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.628525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.690931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.690980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.914 [2024-11-20 16:47:22.690993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.691001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.691094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.691104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.914 [2024-11-20 16:47:22.691128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.691138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.691187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.691195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.914 [2024-11-20 16:47:22.691205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.691212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.691317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.691326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.914 [2024-11-20 16:47:22.691335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.691342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.691411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.691434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:37.914 [2024-11-20 16:47:22.691443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.691450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.691498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.691506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.914 [2024-11-20 16:47:22.691518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.691525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.914 [2024-11-20 16:47:22.691578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.914 [2024-11-20 16:47:22.691587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.914 [2024-11-20 16:47:22.691597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.914 [2024-11-20 16:47:22.691604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:37.914 [2024-11-20 16:47:22.691771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.853 ms, result 0 00:20:37.914 true 00:20:37.914 16:47:22 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76250 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76250 ']' 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76250 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76250 00:20:37.914 killing process with pid 76250 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76250' 00:20:37.914 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76250 00:20:37.915 16:47:22 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76250 00:20:44.578 16:47:29 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:45.515 65536+0 records in 00:20:45.515 65536+0 records out 00:20:45.515 268435456 bytes (268 MB, 256 MiB) copied, 1.07684 s, 249 MB/s 00:20:45.515 16:47:30 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:45.515 [2024-11-20 16:47:30.180797] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
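The dd step above generates the 256 MiB random pattern that trim.sh then replays onto the ftl0 bdev through spdk_dd. The byte count and throughput dd reports can be reproduced with a few lines of shell arithmetic; the snippet below is only a sanity check on those numbers, not part of the test scripts.

#!/usr/bin/env bash
# Reproduce the dd figures reported above (illustration only, not from trim.sh).
bs=4096          # dd ran with bs=4K
count=65536      # dd ran with count=65536
elapsed=1.07684  # seconds, as reported by dd

bytes=$((bs * count))                                        # 268435456 bytes
echo "total: ${bytes} bytes ($((bytes / 1024 / 1024)) MiB)"  # 256 MiB

# dd reports decimal megabytes per second: bytes / elapsed / 10^6, roughly 249 MB/s
awk -v b="$bytes" -v t="$elapsed" 'BEGIN { printf "throughput: %.0f MB/s\n", b / t / 1e6 }'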
00:20:45.515 [2024-11-20 16:47:30.181119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76432 ] 00:20:45.515 [2024-11-20 16:47:30.340717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.774 [2024-11-20 16:47:30.441550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.032 [2024-11-20 16:47:30.692704] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:46.032 [2024-11-20 16:47:30.692766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:46.032 [2024-11-20 16:47:30.846183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.032 [2024-11-20 16:47:30.846250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:46.032 [2024-11-20 16:47:30.846263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.032 [2024-11-20 16:47:30.846271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.032 [2024-11-20 16:47:30.849045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.032 [2024-11-20 16:47:30.849081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.032 [2024-11-20 16:47:30.849091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:20:46.032 [2024-11-20 16:47:30.849098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.032 [2024-11-20 16:47:30.849194] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:46.032 [2024-11-20 16:47:30.850004] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:46.032 [2024-11-20 16:47:30.850029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.032 [2024-11-20 16:47:30.850038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.032 [2024-11-20 16:47:30.850047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:20:46.032 [2024-11-20 16:47:30.850054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.032 [2024-11-20 16:47:30.851186] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:46.032 [2024-11-20 16:47:30.863221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.863263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:46.033 [2024-11-20 16:47:30.863275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.036 ms 00:20:46.033 [2024-11-20 16:47:30.863283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.863391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.863403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:46.033 [2024-11-20 16:47:30.863411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:46.033 [2024-11-20 16:47:30.863419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.868495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:46.033 [2024-11-20 16:47:30.868651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.033 [2024-11-20 16:47:30.868666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.034 ms 00:20:46.033 [2024-11-20 16:47:30.868674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.868761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.868771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.033 [2024-11-20 16:47:30.868779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:46.033 [2024-11-20 16:47:30.868786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.868811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.868821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:46.033 [2024-11-20 16:47:30.868828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:46.033 [2024-11-20 16:47:30.868835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.868855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:46.033 [2024-11-20 16:47:30.872089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.872203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.033 [2024-11-20 16:47:30.872217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:20:46.033 [2024-11-20 16:47:30.872225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.872259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.872269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:46.033 [2024-11-20 16:47:30.872277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:46.033 [2024-11-20 16:47:30.872283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.872300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:46.033 [2024-11-20 16:47:30.872320] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:46.033 [2024-11-20 16:47:30.872354] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:46.033 [2024-11-20 16:47:30.872368] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:46.033 [2024-11-20 16:47:30.872487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:46.033 [2024-11-20 16:47:30.872499] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:46.033 [2024-11-20 16:47:30.872509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:46.033 [2024-11-20 16:47:30.872519] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:46.033 [2024-11-20 16:47:30.872531] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:46.033 [2024-11-20 16:47:30.872538] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:46.033 [2024-11-20 16:47:30.872546] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:46.033 [2024-11-20 16:47:30.872553] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:46.033 [2024-11-20 16:47:30.872560] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:46.033 [2024-11-20 16:47:30.872567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.872574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:46.033 [2024-11-20 16:47:30.872581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:46.033 [2024-11-20 16:47:30.872588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.872675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.033 [2024-11-20 16:47:30.872683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:46.033 [2024-11-20 16:47:30.872692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:46.033 [2024-11-20 16:47:30.872699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.033 [2024-11-20 16:47:30.872817] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:46.033 [2024-11-20 16:47:30.872828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:46.033 [2024-11-20 16:47:30.872836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.033 [2024-11-20 16:47:30.872844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.033 [2024-11-20 16:47:30.872851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:46.033 [2024-11-20 16:47:30.872858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:46.033 [2024-11-20 16:47:30.872864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:46.033 [2024-11-20 16:47:30.872871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:46.033 [2024-11-20 16:47:30.872879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:46.033 [2024-11-20 16:47:30.872885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.033 [2024-11-20 16:47:30.872892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:46.033 [2024-11-20 16:47:30.872898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:46.033 [2024-11-20 16:47:30.872905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.033 [2024-11-20 16:47:30.872917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:46.033 [2024-11-20 16:47:30.872923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:46.033 [2024-11-20 16:47:30.872930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.033 [2024-11-20 16:47:30.872936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:46.033 [2024-11-20 16:47:30.872943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:46.033 [2024-11-20 16:47:30.872951] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.033 [2024-11-20 16:47:30.872958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:46.033 [2024-11-20 16:47:30.872964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:46.033 [2024-11-20 16:47:30.872971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.033 [2024-11-20 16:47:30.872977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:46.033 [2024-11-20 16:47:30.872984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:46.033 [2024-11-20 16:47:30.872990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.033 [2024-11-20 16:47:30.872996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:46.033 [2024-11-20 16:47:30.873003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:46.033 [2024-11-20 16:47:30.873009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.033 [2024-11-20 16:47:30.873016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:46.033 [2024-11-20 16:47:30.873023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:46.033 [2024-11-20 16:47:30.873029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.033 [2024-11-20 16:47:30.873035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:46.033 [2024-11-20 16:47:30.873042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:46.033 [2024-11-20 16:47:30.873048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.033 [2024-11-20 16:47:30.873054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:46.033 [2024-11-20 16:47:30.873061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:46.033 [2024-11-20 16:47:30.873067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.033 [2024-11-20 16:47:30.873074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:46.033 [2024-11-20 16:47:30.873080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:46.033 [2024-11-20 16:47:30.873086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.033 [2024-11-20 16:47:30.873092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:46.033 [2024-11-20 16:47:30.873099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:46.033 [2024-11-20 16:47:30.873105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.033 [2024-11-20 16:47:30.873112] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:46.033 [2024-11-20 16:47:30.873120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:46.033 [2024-11-20 16:47:30.873127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.033 [2024-11-20 16:47:30.873135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.033 [2024-11-20 16:47:30.873142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:46.033 [2024-11-20 16:47:30.873149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:46.033 [2024-11-20 16:47:30.873156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:46.033 
[2024-11-20 16:47:30.873164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:46.033 [2024-11-20 16:47:30.873170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:46.033 [2024-11-20 16:47:30.873176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:46.033 [2024-11-20 16:47:30.873184] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:46.033 [2024-11-20 16:47:30.873193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.034 [2024-11-20 16:47:30.873200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:46.034 [2024-11-20 16:47:30.873208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:46.034 [2024-11-20 16:47:30.873215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:46.034 [2024-11-20 16:47:30.873222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:46.034 [2024-11-20 16:47:30.873228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:46.034 [2024-11-20 16:47:30.873235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:46.034 [2024-11-20 16:47:30.873242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:46.034 [2024-11-20 16:47:30.873248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:46.034 [2024-11-20 16:47:30.873256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:46.034 [2024-11-20 16:47:30.873262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:46.034 [2024-11-20 16:47:30.873269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:46.034 [2024-11-20 16:47:30.873275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:46.034 [2024-11-20 16:47:30.873282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:46.034 [2024-11-20 16:47:30.873289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:46.034 [2024-11-20 16:47:30.873295] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:46.034 [2024-11-20 16:47:30.873304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.034 [2024-11-20 16:47:30.873312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:46.034 [2024-11-20 16:47:30.873319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:46.034 [2024-11-20 16:47:30.873326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:46.034 [2024-11-20 16:47:30.873332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:46.034 [2024-11-20 16:47:30.873339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.034 [2024-11-20 16:47:30.873346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:46.034 [2024-11-20 16:47:30.873356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:20:46.034 [2024-11-20 16:47:30.873362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.034 [2024-11-20 16:47:30.899076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.034 [2024-11-20 16:47:30.899235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.034 [2024-11-20 16:47:30.899251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.625 ms 00:20:46.034 [2024-11-20 16:47:30.899259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.034 [2024-11-20 16:47:30.899409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.034 [2024-11-20 16:47:30.899424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:46.034 [2024-11-20 16:47:30.899432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:46.034 [2024-11-20 16:47:30.899440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:30.939442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:30.939493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.293 [2024-11-20 16:47:30.939505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.979 ms 00:20:46.293 [2024-11-20 16:47:30.939517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:30.939631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:30.939643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.293 [2024-11-20 16:47:30.939652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:46.293 [2024-11-20 16:47:30.939659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:30.939986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:30.940001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.293 [2024-11-20 16:47:30.940010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:20:46.293 [2024-11-20 16:47:30.940023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:30.940151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:30.940160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.293 [2024-11-20 16:47:30.940168] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:46.293 [2024-11-20 16:47:30.940175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:30.953464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:30.953611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.293 [2024-11-20 16:47:30.953629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.269 ms 00:20:46.293 [2024-11-20 16:47:30.953637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:30.965994] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:46.293 [2024-11-20 16:47:30.966028] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:46.293 [2024-11-20 16:47:30.966040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:30.966048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:46.293 [2024-11-20 16:47:30.966057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.294 ms 00:20:46.293 [2024-11-20 16:47:30.966064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:30.989991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:30.990030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:46.293 [2024-11-20 16:47:30.990048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.854 ms 00:20:46.293 [2024-11-20 16:47:30.990057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.001463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.001493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:46.293 [2024-11-20 16:47:31.001502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.329 ms 00:20:46.293 [2024-11-20 16:47:31.001510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.012831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.012956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:46.293 [2024-11-20 16:47:31.012972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.258 ms 00:20:46.293 [2024-11-20 16:47:31.012979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.013616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.013641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:46.293 [2024-11-20 16:47:31.013650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:20:46.293 [2024-11-20 16:47:31.013657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.067628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.067687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:46.293 [2024-11-20 16:47:31.067700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.945 ms 00:20:46.293 [2024-11-20 16:47:31.067707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.078530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:46.293 [2024-11-20 16:47:31.093686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.093733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:46.293 [2024-11-20 16:47:31.093745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.847 ms 00:20:46.293 [2024-11-20 16:47:31.093753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.093854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.093867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:46.293 [2024-11-20 16:47:31.093876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:46.293 [2024-11-20 16:47:31.093884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.093942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.093952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:46.293 [2024-11-20 16:47:31.093961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:46.293 [2024-11-20 16:47:31.093968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.093992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.094000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:46.293 [2024-11-20 16:47:31.094010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:46.293 [2024-11-20 16:47:31.094017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.094049] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:46.293 [2024-11-20 16:47:31.094059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.094066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:46.293 [2024-11-20 16:47:31.094074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:46.293 [2024-11-20 16:47:31.094080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.117431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.117488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:46.293 [2024-11-20 16:47:31.117501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.331 ms 00:20:46.293 [2024-11-20 16:47:31.117509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.293 [2024-11-20 16:47:31.117625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.293 [2024-11-20 16:47:31.117637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:46.293 [2024-11-20 16:47:31.117645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:46.293 [2024-11-20 16:47:31.117653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
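Each FTL management step in this log is emitted by mngt/ftl_mngt.c as a small group of trace_step entries: an Action (or Rollback) marker followed by name, duration, and status lines. When chasing a slow startup it can help to rank those durations. The sketch below assumes the console output was saved to a file (ftl.log is a made-up name) with one log entry per line, as the console actually prints it.

#!/usr/bin/env bash
# Rank FTL management steps by duration from a saved console log.
# ftl.log is a placeholder path; any capture of the output above works.
log=${1:-ftl.log}

awk '
  / name: /     { sub(/.* name: /, ""); step = $0 }
  / duration: / { sub(/.* duration: /, ""); sub(/ ms.*/, ""); printf "%10.3f ms  %s\n", $0, step }
' "$log" | sort -nr | head -n 10

The finish_msg summary lines ("duration = ... ms") use a different format and are deliberately left out; only the per-step trace_step entries are counted.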
00:20:46.293 [2024-11-20 16:47:31.118570] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.293 [2024-11-20 16:47:31.121692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.095 ms, result 0 00:20:46.293 [2024-11-20 16:47:31.122355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.293 [2024-11-20 16:47:31.135361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:47.667  [2024-11-20T16:47:33.488Z] Copying: 43/256 [MB] (43 MBps) [2024-11-20T16:47:34.421Z] Copying: 85/256 [MB] (42 MBps) [2024-11-20T16:47:35.355Z] Copying: 130/256 [MB] (44 MBps) [2024-11-20T16:47:36.289Z] Copying: 173/256 [MB] (42 MBps) [2024-11-20T16:47:37.224Z] Copying: 217/256 [MB] (44 MBps) [2024-11-20T16:47:37.224Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-20 16:47:37.028974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.338 [2024-11-20 16:47:37.039592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.039755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.339 [2024-11-20 16:47:37.039774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:52.339 [2024-11-20 16:47:37.039783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.039808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:52.339 [2024-11-20 16:47:37.042406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.042440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.339 [2024-11-20 16:47:37.042451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:20:52.339 [2024-11-20 16:47:37.042459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.044293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.044341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.339 [2024-11-20 16:47:37.044358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:20:52.339 [2024-11-20 16:47:37.044371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.051514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.051547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.339 [2024-11-20 16:47:37.051562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.103 ms 00:20:52.339 [2024-11-20 16:47:37.051570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.058485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.058609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.339 [2024-11-20 16:47:37.058625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.870 ms 00:20:52.339 [2024-11-20 16:47:37.058634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 
16:47:37.081596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.081733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.339 [2024-11-20 16:47:37.081750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.918 ms 00:20:52.339 [2024-11-20 16:47:37.081757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.095528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.095562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.339 [2024-11-20 16:47:37.095580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.737 ms 00:20:52.339 [2024-11-20 16:47:37.095590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.095723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.095734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.339 [2024-11-20 16:47:37.095743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:52.339 [2024-11-20 16:47:37.095750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.118719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.118750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.339 [2024-11-20 16:47:37.118760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.953 ms 00:20:52.339 [2024-11-20 16:47:37.118768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.140944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.141069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.339 [2024-11-20 16:47:37.141084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.143 ms 00:20:52.339 [2024-11-20 16:47:37.141091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.162425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.162454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.339 [2024-11-20 16:47:37.162465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.301 ms 00:20:52.339 [2024-11-20 16:47:37.162472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.184144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.339 [2024-11-20 16:47:37.184268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.339 [2024-11-20 16:47:37.184283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.613 ms 00:20:52.339 [2024-11-20 16:47:37.184290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.339 [2024-11-20 16:47:37.184346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.339 [2024-11-20 16:47:37.184366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:20:52.339 [2024-11-20 16:47:37.184400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:20:52.339 [2024-11-20 16:47:37.184586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.339 [2024-11-20 16:47:37.184702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184941] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.184998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.340 [2024-11-20 16:47:37.185129] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.340 [2024-11-20 16:47:37.185137] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:20:52.340 [2024-11-20 16:47:37.185144] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.340 [2024-11-20 16:47:37.185152] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:52.340 [2024-11-20 16:47:37.185159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.340 [2024-11-20 16:47:37.185167] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.340 [2024-11-20 16:47:37.185174] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.340 [2024-11-20 16:47:37.185182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.340 [2024-11-20 16:47:37.185189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.340 [2024-11-20 16:47:37.185196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.340 [2024-11-20 16:47:37.185202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.340 [2024-11-20 16:47:37.185209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.340 [2024-11-20 16:47:37.185216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.340 [2024-11-20 16:47:37.185227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:20:52.340 [2024-11-20 16:47:37.185234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.340 [2024-11-20 16:47:37.197445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.340 [2024-11-20 16:47:37.197476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.340 [2024-11-20 16:47:37.197486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.194 ms 00:20:52.340 [2024-11-20 16:47:37.197493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.340 [2024-11-20 16:47:37.197839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.340 [2024-11-20 16:47:37.197852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.340 [2024-11-20 16:47:37.197860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:20:52.340 [2024-11-20 16:47:37.197867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.598 [2024-11-20 16:47:37.232248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.598 [2024-11-20 16:47:37.232285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.598 [2024-11-20 16:47:37.232295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.598 [2024-11-20 16:47:37.232304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.598 [2024-11-20 16:47:37.232393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.598 [2024-11-20 16:47:37.232407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.598 [2024-11-20 16:47:37.232415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.598 [2024-11-20 16:47:37.232423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.232469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.232478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:20:52.599 [2024-11-20 16:47:37.232486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.232493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.232510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.232517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.599 [2024-11-20 16:47:37.232528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.232535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.310312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.310359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.599 [2024-11-20 16:47:37.310371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.310395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.372690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.372737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.599 [2024-11-20 16:47:37.372753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.372761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.372815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.372824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.599 [2024-11-20 16:47:37.372832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.372839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.372866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.372873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.599 [2024-11-20 16:47:37.372881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.372890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.372974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.372984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.599 [2024-11-20 16:47:37.372991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.372998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.373027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.373036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.599 [2024-11-20 16:47:37.373043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.373050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.373086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.373095] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.599 [2024-11-20 16:47:37.373102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.373109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.373150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.599 [2024-11-20 16:47:37.373159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.599 [2024-11-20 16:47:37.373167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.599 [2024-11-20 16:47:37.373176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.599 [2024-11-20 16:47:37.373298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.705 ms, result 0 00:20:53.972 00:20:53.972 00:20:53.972 16:47:38 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76523 00:20:53.972 16:47:38 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76523 00:20:53.972 16:47:38 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:53.972 16:47:38 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76523 ']' 00:20:53.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.972 16:47:38 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.972 16:47:38 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.972 16:47:38 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.972 16:47:38 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.972 16:47:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:53.972 [2024-11-20 16:47:38.595625] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:20:53.972 [2024-11-20 16:47:38.595765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76523 ] 00:20:53.972 [2024-11-20 16:47:38.751775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.972 [2024-11-20 16:47:38.853959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.911 16:47:39 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.911 16:47:39 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:54.911 16:47:39 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:54.912 [2024-11-20 16:47:39.641596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.912 [2024-11-20 16:47:39.641655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.170 [2024-11-20 16:47:39.812775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.170 [2024-11-20 16:47:39.812829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.170 [2024-11-20 16:47:39.812844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:55.170 [2024-11-20 16:47:39.812852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.170 [2024-11-20 16:47:39.815527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.170 [2024-11-20 16:47:39.815562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.170 [2024-11-20 16:47:39.815573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:20:55.170 [2024-11-20 16:47:39.815580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.170 [2024-11-20 16:47:39.815652] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:55.170 [2024-11-20 16:47:39.816355] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.170 [2024-11-20 16:47:39.816396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.170 [2024-11-20 16:47:39.816405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.170 [2024-11-20 16:47:39.816416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:20:55.171 [2024-11-20 16:47:39.816423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.817520] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:55.171 [2024-11-20 16:47:39.829724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.829771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:55.171 [2024-11-20 16:47:39.829787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.208 ms 00:20:55.171 [2024-11-20 16:47:39.829798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.829927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.829948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:55.171 [2024-11-20 16:47:39.829961] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:55.171 [2024-11-20 16:47:39.829970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.835151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.835329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.171 [2024-11-20 16:47:39.835345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.131 ms 00:20:55.171 [2024-11-20 16:47:39.835355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.835475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.835488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.171 [2024-11-20 16:47:39.835496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:55.171 [2024-11-20 16:47:39.835505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.835537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.835547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.171 [2024-11-20 16:47:39.835555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:55.171 [2024-11-20 16:47:39.835563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.835587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:55.171 [2024-11-20 16:47:39.838801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.838831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.171 [2024-11-20 16:47:39.838842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:20:55.171 [2024-11-20 16:47:39.838850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.838887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.838895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.171 [2024-11-20 16:47:39.838905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:55.171 [2024-11-20 16:47:39.838914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.838935] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:55.171 [2024-11-20 16:47:39.838951] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:55.171 [2024-11-20 16:47:39.838992] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:55.171 [2024-11-20 16:47:39.839007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:55.171 [2024-11-20 16:47:39.839111] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:55.171 [2024-11-20 16:47:39.839121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.171 [2024-11-20 16:47:39.839135] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:55.171 [2024-11-20 16:47:39.839146] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839156] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839164] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:55.171 [2024-11-20 16:47:39.839173] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:55.171 [2024-11-20 16:47:39.839180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:55.171 [2024-11-20 16:47:39.839190] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:55.171 [2024-11-20 16:47:39.839198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.839206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.171 [2024-11-20 16:47:39.839214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:20:55.171 [2024-11-20 16:47:39.839222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.839310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.171 [2024-11-20 16:47:39.839319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.171 [2024-11-20 16:47:39.839326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:55.171 [2024-11-20 16:47:39.839335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.171 [2024-11-20 16:47:39.839450] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.171 [2024-11-20 16:47:39.839463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.171 [2024-11-20 16:47:39.839471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.171 [2024-11-20 16:47:39.839495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.171 [2024-11-20 16:47:39.839521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.171 [2024-11-20 16:47:39.839537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.171 [2024-11-20 16:47:39.839545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:55.171 [2024-11-20 16:47:39.839551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.171 [2024-11-20 16:47:39.839559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.171 [2024-11-20 16:47:39.839565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:55.171 [2024-11-20 16:47:39.839573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.171 
[2024-11-20 16:47:39.839580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:55.171 [2024-11-20 16:47:39.839592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.171 [2024-11-20 16:47:39.839619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.171 [2024-11-20 16:47:39.839643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.171 [2024-11-20 16:47:39.839664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.171 [2024-11-20 16:47:39.839688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.171 [2024-11-20 16:47:39.839708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.171 [2024-11-20 16:47:39.839724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:55.171 [2024-11-20 16:47:39.839733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:55.171 [2024-11-20 16:47:39.839739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.171 [2024-11-20 16:47:39.839747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:55.171 [2024-11-20 16:47:39.839753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:55.171 [2024-11-20 16:47:39.839762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:55.171 [2024-11-20 16:47:39.839776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:55.171 [2024-11-20 16:47:39.839783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839790] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.171 [2024-11-20 16:47:39.839798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:55.171 [2024-11-20 16:47:39.839808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.171 [2024-11-20 16:47:39.839814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.171 [2024-11-20 16:47:39.839823] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:55.171 [2024-11-20 16:47:39.839829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.171 [2024-11-20 16:47:39.839837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.171 [2024-11-20 16:47:39.839845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.172 [2024-11-20 16:47:39.839852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.172 [2024-11-20 16:47:39.839859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:55.172 [2024-11-20 16:47:39.839868] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.172 [2024-11-20 16:47:39.839876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.172 [2024-11-20 16:47:39.839888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:55.172 [2024-11-20 16:47:39.839896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:55.172 [2024-11-20 16:47:39.839906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:55.172 [2024-11-20 16:47:39.839913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:55.172 [2024-11-20 16:47:39.839921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:55.172 [2024-11-20 16:47:39.839928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:55.172 [2024-11-20 16:47:39.839937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:55.172 [2024-11-20 16:47:39.839944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:55.172 [2024-11-20 16:47:39.839952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:55.172 [2024-11-20 16:47:39.839959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:55.172 [2024-11-20 16:47:39.839967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:55.172 [2024-11-20 16:47:39.839974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:55.172 [2024-11-20 16:47:39.839982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:55.172 [2024-11-20 16:47:39.839989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:55.172 [2024-11-20 16:47:39.839998] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.172 [2024-11-20 
16:47:39.840006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.172 [2024-11-20 16:47:39.840016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.172 [2024-11-20 16:47:39.840023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.172 [2024-11-20 16:47:39.840032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.172 [2024-11-20 16:47:39.840039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.172 [2024-11-20 16:47:39.840047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.840054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.172 [2024-11-20 16:47:39.840071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:20:55.172 [2024-11-20 16:47:39.840078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.866049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.866387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.172 [2024-11-20 16:47:39.866407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.883 ms 00:20:55.172 [2024-11-20 16:47:39.866415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.866555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.866565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.172 [2024-11-20 16:47:39.866574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:55.172 [2024-11-20 16:47:39.866582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.896719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.896867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.172 [2024-11-20 16:47:39.896890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.114 ms 00:20:55.172 [2024-11-20 16:47:39.896898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.896971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.896980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.172 [2024-11-20 16:47:39.896990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:55.172 [2024-11-20 16:47:39.896997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.897316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.897330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.172 [2024-11-20 16:47:39.897341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:20:55.172 [2024-11-20 16:47:39.897350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.897495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.897505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.172 [2024-11-20 16:47:39.897515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:55.172 [2024-11-20 16:47:39.897522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.911866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.912020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.172 [2024-11-20 16:47:39.912039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.320 ms 00:20:55.172 [2024-11-20 16:47:39.912047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.924591] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:55.172 [2024-11-20 16:47:39.924629] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.172 [2024-11-20 16:47:39.924642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.924650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.172 [2024-11-20 16:47:39.924662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.464 ms 00:20:55.172 [2024-11-20 16:47:39.924670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.948939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.948992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.172 [2024-11-20 16:47:39.949007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.177 ms 00:20:55.172 [2024-11-20 16:47:39.949015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.961215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.961250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.172 [2024-11-20 16:47:39.961265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.083 ms 00:20:55.172 [2024-11-20 16:47:39.961272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.972770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.972802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.172 [2024-11-20 16:47:39.972814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.420 ms 00:20:55.172 [2024-11-20 16:47:39.972822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:39.973489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:39.973513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.172 [2024-11-20 16:47:39.973524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:20:55.172 [2024-11-20 16:47:39.973531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 
16:47:40.039759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.172 [2024-11-20 16:47:40.039819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.172 [2024-11-20 16:47:40.039836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.199 ms 00:20:55.172 [2024-11-20 16:47:40.039844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.172 [2024-11-20 16:47:40.050554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:55.431 [2024-11-20 16:47:40.064835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.431 [2024-11-20 16:47:40.064881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.431 [2024-11-20 16:47:40.064896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.878 ms 00:20:55.431 [2024-11-20 16:47:40.064905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.431 [2024-11-20 16:47:40.064992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.431 [2024-11-20 16:47:40.065004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.431 [2024-11-20 16:47:40.065013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:55.431 [2024-11-20 16:47:40.065023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.431 [2024-11-20 16:47:40.065071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.431 [2024-11-20 16:47:40.065081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.431 [2024-11-20 16:47:40.065088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:55.431 [2024-11-20 16:47:40.065097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.431 [2024-11-20 16:47:40.065121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.432 [2024-11-20 16:47:40.065132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.432 [2024-11-20 16:47:40.065140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:55.432 [2024-11-20 16:47:40.065151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.432 [2024-11-20 16:47:40.065180] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.432 [2024-11-20 16:47:40.065193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.432 [2024-11-20 16:47:40.065200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.432 [2024-11-20 16:47:40.065211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:55.432 [2024-11-20 16:47:40.065218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.432 [2024-11-20 16:47:40.088405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.432 [2024-11-20 16:47:40.088443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.432 [2024-11-20 16:47:40.088457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.161 ms 00:20:55.432 [2024-11-20 16:47:40.088465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.432 [2024-11-20 16:47:40.088558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.432 [2024-11-20 16:47:40.088568] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.432 [2024-11-20 16:47:40.088578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:55.432 [2024-11-20 16:47:40.088588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.432 [2024-11-20 16:47:40.089356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.432 [2024-11-20 16:47:40.092523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.315 ms, result 0 00:20:55.432 [2024-11-20 16:47:40.093752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.432 Some configs were skipped because the RPC state that can call them passed over. 00:20:55.432 16:47:40 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:55.690 [2024-11-20 16:47:40.324012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.690 [2024-11-20 16:47:40.324194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:55.690 [2024-11-20 16:47:40.324253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:20:55.690 [2024-11-20 16:47:40.324280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.690 [2024-11-20 16:47:40.324475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.891 ms, result 0 00:20:55.690 true 00:20:55.690 16:47:40 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:55.690 [2024-11-20 16:47:40.535913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.690 [2024-11-20 16:47:40.536063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:55.690 [2024-11-20 16:47:40.536121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:20:55.690 [2024-11-20 16:47:40.536144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.690 [2024-11-20 16:47:40.536198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.397 ms, result 0 00:20:55.690 true 00:20:55.690 16:47:40 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76523 00:20:55.690 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76523 ']' 00:20:55.690 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76523 00:20:55.690 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:55.690 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.690 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76523 00:20:55.947 killing process with pid 76523 00:20:55.947 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.947 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.947 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76523' 00:20:55.947 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76523 00:20:55.947 16:47:40 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76523 00:20:56.514 [2024-11-20 16:47:41.265721] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.514 [2024-11-20 16:47:41.265780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:56.514 [2024-11-20 16:47:41.265793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.514 [2024-11-20 16:47:41.265802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.514 [2024-11-20 16:47:41.265824] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:56.515 [2024-11-20 16:47:41.268440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.268472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:56.515 [2024-11-20 16:47:41.268486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:20:56.515 [2024-11-20 16:47:41.268495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.268781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.268790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:56.515 [2024-11-20 16:47:41.268799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:56.515 [2024-11-20 16:47:41.268807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.272772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.272803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:56.515 [2024-11-20 16:47:41.272817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.945 ms 00:20:56.515 [2024-11-20 16:47:41.272824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.279760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.279921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:56.515 [2024-11-20 16:47:41.279942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.899 ms 00:20:56.515 [2024-11-20 16:47:41.279949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.289596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.289634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:56.515 [2024-11-20 16:47:41.289648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.579 ms 00:20:56.515 [2024-11-20 16:47:41.289662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.299762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.299798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:56.515 [2024-11-20 16:47:41.299814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.053 ms 00:20:56.515 [2024-11-20 16:47:41.299822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.299953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.299963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:56.515 [2024-11-20 16:47:41.299973] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:56.515 [2024-11-20 16:47:41.299980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.309357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.309399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:56.515 [2024-11-20 16:47:41.309411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.354 ms 00:20:56.515 [2024-11-20 16:47:41.309419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.318506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.318629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:56.515 [2024-11-20 16:47:41.318649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.046 ms 00:20:56.515 [2024-11-20 16:47:41.318656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.327420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.327449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:56.515 [2024-11-20 16:47:41.327464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.711 ms 00:20:56.515 [2024-11-20 16:47:41.327472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.336421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.515 [2024-11-20 16:47:41.336450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:56.515 [2024-11-20 16:47:41.336461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.882 ms 00:20:56.515 [2024-11-20 16:47:41.336468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.515 [2024-11-20 16:47:41.336502] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:56.515 [2024-11-20 16:47:41.336516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336602] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 
[2024-11-20 16:47:41.336808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:56.515 [2024-11-20 16:47:41.336938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.336945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.336955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.336962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.336971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.336979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.336987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.336994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:56.516 [2024-11-20 16:47:41.337010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:56.516 [2024-11-20 16:47:41.337344] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:56.516 [2024-11-20 16:47:41.337357] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:20:56.516 [2024-11-20 16:47:41.337370] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:56.516 [2024-11-20 16:47:41.337398] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:56.516 [2024-11-20 16:47:41.337405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:56.516 [2024-11-20 16:47:41.337414] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:56.516 [2024-11-20 16:47:41.337421] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:56.516 [2024-11-20 16:47:41.337430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:56.516 [2024-11-20 16:47:41.337437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:56.516 [2024-11-20 16:47:41.337445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:56.516 [2024-11-20 16:47:41.337452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:56.516 [2024-11-20 16:47:41.337460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:56.516 [2024-11-20 16:47:41.337467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:56.516 [2024-11-20 16:47:41.337477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:20:56.516 [2024-11-20 16:47:41.337484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.516 [2024-11-20 16:47:41.349710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.516 [2024-11-20 16:47:41.349833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:56.516 [2024-11-20 16:47:41.349852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:20:56.516 [2024-11-20 16:47:41.349860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.516 [2024-11-20 16:47:41.350224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.516 [2024-11-20 16:47:41.350234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:56.516 [2024-11-20 16:47:41.350259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:56.516 [2024-11-20 16:47:41.350269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.516 [2024-11-20 16:47:41.393882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.516 [2024-11-20 16:47:41.393924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.516 [2024-11-20 16:47:41.393937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.516 [2024-11-20 16:47:41.393944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.516 [2024-11-20 16:47:41.394057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.516 [2024-11-20 16:47:41.394067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.516 [2024-11-20 16:47:41.394076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.516 [2024-11-20 16:47:41.394086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.516 [2024-11-20 16:47:41.394135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.516 [2024-11-20 16:47:41.394144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.516 [2024-11-20 16:47:41.394155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.516 [2024-11-20 16:47:41.394162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.516 [2024-11-20 16:47:41.394181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.516 [2024-11-20 16:47:41.394189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.516 [2024-11-20 16:47:41.394198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.516 [2024-11-20 16:47:41.394205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.774 [2024-11-20 16:47:41.469794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.774 [2024-11-20 16:47:41.469841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.774 [2024-11-20 16:47:41.469854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.774 [2024-11-20 16:47:41.469862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.774 [2024-11-20 
16:47:41.532962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.774 [2024-11-20 16:47:41.533152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.774 [2024-11-20 16:47:41.533172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.774 [2024-11-20 16:47:41.533182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.774 [2024-11-20 16:47:41.533266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.774 [2024-11-20 16:47:41.533275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.774 [2024-11-20 16:47:41.533287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.774 [2024-11-20 16:47:41.533295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.774 [2024-11-20 16:47:41.533324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.774 [2024-11-20 16:47:41.533333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.774 [2024-11-20 16:47:41.533342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.774 [2024-11-20 16:47:41.533349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.774 [2024-11-20 16:47:41.533469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.774 [2024-11-20 16:47:41.533479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.774 [2024-11-20 16:47:41.533489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.774 [2024-11-20 16:47:41.533496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.774 [2024-11-20 16:47:41.533529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.774 [2024-11-20 16:47:41.533537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:56.774 [2024-11-20 16:47:41.533546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.774 [2024-11-20 16:47:41.533553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.774 [2024-11-20 16:47:41.533587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.774 [2024-11-20 16:47:41.533597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.774 [2024-11-20 16:47:41.533608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.774 [2024-11-20 16:47:41.533616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.775 [2024-11-20 16:47:41.533656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.775 [2024-11-20 16:47:41.533665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.775 [2024-11-20 16:47:41.533674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.775 [2024-11-20 16:47:41.533682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.775 [2024-11-20 16:47:41.533808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.066 ms, result 0 00:20:57.342 16:47:42 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:57.342 16:47:42 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.600 [2024-11-20 16:47:42.247372] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:20:57.600 [2024-11-20 16:47:42.247496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76576 ] 00:20:57.600 [2024-11-20 16:47:42.405947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.858 [2024-11-20 16:47:42.506930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.117 [2024-11-20 16:47:42.760099] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.117 [2024-11-20 16:47:42.760160] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.117 [2024-11-20 16:47:42.914185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.914236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.117 [2024-11-20 16:47:42.914248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:58.117 [2024-11-20 16:47:42.914257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.916903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.917049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.117 [2024-11-20 16:47:42.917065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.628 ms 00:20:58.117 [2024-11-20 16:47:42.917073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.917193] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.117 [2024-11-20 16:47:42.917993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.117 [2024-11-20 16:47:42.918095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.918149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.117 [2024-11-20 16:47:42.918173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:20:58.117 [2024-11-20 16:47:42.918226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.919349] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:58.117 [2024-11-20 16:47:42.931302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.931338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:58.117 [2024-11-20 16:47:42.931350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.954 ms 00:20:58.117 [2024-11-20 16:47:42.931358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.931451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.931463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:58.117 [2024-11-20 16:47:42.931471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.023 ms 00:20:58.117 [2024-11-20 16:47:42.931478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.936336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.936481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.117 [2024-11-20 16:47:42.936496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.816 ms 00:20:58.117 [2024-11-20 16:47:42.936504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.936598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.936608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.117 [2024-11-20 16:47:42.936616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:58.117 [2024-11-20 16:47:42.936623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.936647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.936657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.117 [2024-11-20 16:47:42.936665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:58.117 [2024-11-20 16:47:42.936672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.936692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:58.117 [2024-11-20 16:47:42.939966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.940081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.117 [2024-11-20 16:47:42.940095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:20:58.117 [2024-11-20 16:47:42.940103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.940138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.940147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.117 [2024-11-20 16:47:42.940155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:58.117 [2024-11-20 16:47:42.940162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.117 [2024-11-20 16:47:42.940179] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:58.117 [2024-11-20 16:47:42.940199] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:58.117 [2024-11-20 16:47:42.940233] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:58.117 [2024-11-20 16:47:42.940248] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:58.117 [2024-11-20 16:47:42.940349] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.117 [2024-11-20 16:47:42.940359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.117 [2024-11-20 16:47:42.940370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:58.117 [2024-11-20 16:47:42.940390] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.117 [2024-11-20 16:47:42.940401] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.117 [2024-11-20 16:47:42.940409] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:58.117 [2024-11-20 16:47:42.940417] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.117 [2024-11-20 16:47:42.940424] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.117 [2024-11-20 16:47:42.940431] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:58.117 [2024-11-20 16:47:42.940439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.117 [2024-11-20 16:47:42.940446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.118 [2024-11-20 16:47:42.940454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:20:58.118 [2024-11-20 16:47:42.940461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.118 [2024-11-20 16:47:42.940548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.118 [2024-11-20 16:47:42.940556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.118 [2024-11-20 16:47:42.940565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:58.118 [2024-11-20 16:47:42.940572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.118 [2024-11-20 16:47:42.940686] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.118 [2024-11-20 16:47:42.940697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.118 [2024-11-20 16:47:42.940705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.118 [2024-11-20 16:47:42.940712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.118 [2024-11-20 16:47:42.940726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:58.118 [2024-11-20 16:47:42.940740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.118 [2024-11-20 16:47:42.940748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.118 [2024-11-20 16:47:42.940761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.118 [2024-11-20 16:47:42.940768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:58.118 [2024-11-20 16:47:42.940774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.118 [2024-11-20 16:47:42.940787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.118 [2024-11-20 16:47:42.940793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:58.118 [2024-11-20 16:47:42.940799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940807] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.118 [2024-11-20 16:47:42.940814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:58.118 [2024-11-20 16:47:42.940820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.118 [2024-11-20 16:47:42.940833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.118 [2024-11-20 16:47:42.940846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.118 [2024-11-20 16:47:42.940853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.118 [2024-11-20 16:47:42.940866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.118 [2024-11-20 16:47:42.940873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.118 [2024-11-20 16:47:42.940885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.118 [2024-11-20 16:47:42.940892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.118 [2024-11-20 16:47:42.940905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.118 [2024-11-20 16:47:42.940912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.118 [2024-11-20 16:47:42.940924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.118 [2024-11-20 16:47:42.940930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:58.118 [2024-11-20 16:47:42.940937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.118 [2024-11-20 16:47:42.940943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.118 [2024-11-20 16:47:42.940950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:58.118 [2024-11-20 16:47:42.940956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:58.118 [2024-11-20 16:47:42.940968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:58.118 [2024-11-20 16:47:42.940975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.118 [2024-11-20 16:47:42.940981] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.118 [2024-11-20 16:47:42.940989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.118 [2024-11-20 16:47:42.940997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.118 [2024-11-20 16:47:42.941005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.118 [2024-11-20 16:47:42.941012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.118 
[2024-11-20 16:47:42.941019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.118 [2024-11-20 16:47:42.941025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.118 [2024-11-20 16:47:42.941032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.118 [2024-11-20 16:47:42.941039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.118 [2024-11-20 16:47:42.941045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.118 [2024-11-20 16:47:42.941054] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.118 [2024-11-20 16:47:42.941063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.118 [2024-11-20 16:47:42.941072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:58.118 [2024-11-20 16:47:42.941079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:58.118 [2024-11-20 16:47:42.941086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:58.118 [2024-11-20 16:47:42.941093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:58.118 [2024-11-20 16:47:42.941099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:58.118 [2024-11-20 16:47:42.941106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:58.118 [2024-11-20 16:47:42.941113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:58.118 [2024-11-20 16:47:42.941119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:58.118 [2024-11-20 16:47:42.941126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:58.118 [2024-11-20 16:47:42.941133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:58.118 [2024-11-20 16:47:42.941139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:58.118 [2024-11-20 16:47:42.941146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:58.118 [2024-11-20 16:47:42.941153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:58.118 [2024-11-20 16:47:42.941160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:58.118 [2024-11-20 16:47:42.941167] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.118 [2024-11-20 16:47:42.941174] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.118 [2024-11-20 16:47:42.941182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.118 [2024-11-20 16:47:42.941189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.118 [2024-11-20 16:47:42.941195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.118 [2024-11-20 16:47:42.941202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.118 [2024-11-20 16:47:42.941209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.118 [2024-11-20 16:47:42.941216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.118 [2024-11-20 16:47:42.941226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:20:58.118 [2024-11-20 16:47:42.941232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.118 [2024-11-20 16:47:42.966936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.118 [2024-11-20 16:47:42.967064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.118 [2024-11-20 16:47:42.967079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.656 ms 00:20:58.118 [2024-11-20 16:47:42.967087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.118 [2024-11-20 16:47:42.967205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.118 [2024-11-20 16:47:42.967219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:58.118 [2024-11-20 16:47:42.967227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:58.118 [2024-11-20 16:47:42.967234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.377 [2024-11-20 16:47:43.014919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.377 [2024-11-20 16:47:43.014973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.377 [2024-11-20 16:47:43.014985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.663 ms 00:20:58.377 [2024-11-20 16:47:43.014996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.377 [2024-11-20 16:47:43.015095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.377 [2024-11-20 16:47:43.015107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.377 [2024-11-20 16:47:43.015116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:58.377 [2024-11-20 16:47:43.015123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.377 [2024-11-20 16:47:43.015466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.377 [2024-11-20 16:47:43.015486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.377 [2024-11-20 16:47:43.015496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:20:58.377 [2024-11-20 16:47:43.015509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.377 [2024-11-20 
16:47:43.015634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.377 [2024-11-20 16:47:43.015643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.377 [2024-11-20 16:47:43.015651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:58.377 [2024-11-20 16:47:43.015658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.377 [2024-11-20 16:47:43.028963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.377 [2024-11-20 16:47:43.028991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.377 [2024-11-20 16:47:43.029001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.286 ms 00:20:58.377 [2024-11-20 16:47:43.029008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.377 [2024-11-20 16:47:43.041195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:58.377 [2024-11-20 16:47:43.041228] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:58.377 [2024-11-20 16:47:43.041240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.377 [2024-11-20 16:47:43.041249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:58.377 [2024-11-20 16:47:43.041258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.137 ms 00:20:58.378 [2024-11-20 16:47:43.041265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.065012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.065056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:58.378 [2024-11-20 16:47:43.065068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.673 ms 00:20:58.378 [2024-11-20 16:47:43.065076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.076244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.076276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:58.378 [2024-11-20 16:47:43.076286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.093 ms 00:20:58.378 [2024-11-20 16:47:43.076293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.087151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.087179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:58.378 [2024-11-20 16:47:43.087188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.800 ms 00:20:58.378 [2024-11-20 16:47:43.087195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.087820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.087843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:58.378 [2024-11-20 16:47:43.087852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:20:58.378 [2024-11-20 16:47:43.087859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.141279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.141330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:58.378 [2024-11-20 16:47:43.141345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.397 ms 00:20:58.378 [2024-11-20 16:47:43.141353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.151664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:58.378 [2024-11-20 16:47:43.165431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.165465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:58.378 [2024-11-20 16:47:43.165477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.826 ms 00:20:58.378 [2024-11-20 16:47:43.165485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.165570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.165581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:58.378 [2024-11-20 16:47:43.165589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:58.378 [2024-11-20 16:47:43.165597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.165640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.165648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:58.378 [2024-11-20 16:47:43.165656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:58.378 [2024-11-20 16:47:43.165663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.165689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.165699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:58.378 [2024-11-20 16:47:43.165707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.378 [2024-11-20 16:47:43.165715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.165742] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:58.378 [2024-11-20 16:47:43.165751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.165758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:58.378 [2024-11-20 16:47:43.165766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:58.378 [2024-11-20 16:47:43.165773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.188062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.188094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:58.378 [2024-11-20 16:47:43.188106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.269 ms 00:20:58.378 [2024-11-20 16:47:43.188114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.188200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.378 [2024-11-20 16:47:43.188211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:58.378 [2024-11-20 16:47:43.188219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:58.378 [2024-11-20 16:47:43.188227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.378 [2024-11-20 16:47:43.189417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.378 [2024-11-20 16:47:43.192273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.941 ms, result 0 00:20:58.378 [2024-11-20 16:47:43.192951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.378 [2024-11-20 16:47:43.205607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:59.751  [2024-11-20T16:47:45.570Z] Copying: 45/256 [MB] (45 MBps) [2024-11-20T16:47:46.504Z] Copying: 88/256 [MB] (42 MBps) [2024-11-20T16:47:47.437Z] Copying: 130/256 [MB] (41 MBps) [2024-11-20T16:47:48.366Z] Copying: 173/256 [MB] (43 MBps) [2024-11-20T16:47:49.299Z] Copying: 216/256 [MB] (42 MBps) [2024-11-20T16:47:49.299Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-20 16:47:49.133885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.413 [2024-11-20 16:47:49.143428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.413 [2024-11-20 16:47:49.143471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:04.413 [2024-11-20 16:47:49.143486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.413 [2024-11-20 16:47:49.143498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.413 [2024-11-20 16:47:49.143523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:04.413 [2024-11-20 16:47:49.146092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.413 [2024-11-20 16:47:49.146124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:04.413 [2024-11-20 16:47:49.146135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:21:04.413 [2024-11-20 16:47:49.146144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.413 [2024-11-20 16:47:49.146416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.413 [2024-11-20 16:47:49.146426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:04.413 [2024-11-20 16:47:49.146436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:21:04.413 [2024-11-20 16:47:49.146443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.413 [2024-11-20 16:47:49.150144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.413 [2024-11-20 16:47:49.150186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:04.413 [2024-11-20 16:47:49.150196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:21:04.413 [2024-11-20 16:47:49.150206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.413 [2024-11-20 16:47:49.157206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.413 [2024-11-20 16:47:49.157351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:21:04.413 [2024-11-20 16:47:49.157369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.981 ms 00:21:04.413 [2024-11-20 16:47:49.157388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.413 [2024-11-20 16:47:49.180749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.413 [2024-11-20 16:47:49.180790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:04.413 [2024-11-20 16:47:49.180802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.300 ms 00:21:04.413 [2024-11-20 16:47:49.180809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.413 [2024-11-20 16:47:49.194834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.414 [2024-11-20 16:47:49.194882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:04.414 [2024-11-20 16:47:49.194895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.982 ms 00:21:04.414 [2024-11-20 16:47:49.194905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.414 [2024-11-20 16:47:49.195044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.414 [2024-11-20 16:47:49.195055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:04.414 [2024-11-20 16:47:49.195063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:04.414 [2024-11-20 16:47:49.195070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.414 [2024-11-20 16:47:49.217842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.414 [2024-11-20 16:47:49.217884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:04.414 [2024-11-20 16:47:49.217895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.747 ms 00:21:04.414 [2024-11-20 16:47:49.217902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.414 [2024-11-20 16:47:49.240297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.414 [2024-11-20 16:47:49.240488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:04.414 [2024-11-20 16:47:49.240507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.355 ms 00:21:04.414 [2024-11-20 16:47:49.240514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.414 [2024-11-20 16:47:49.262645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.414 [2024-11-20 16:47:49.262683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:04.414 [2024-11-20 16:47:49.262695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.092 ms 00:21:04.414 [2024-11-20 16:47:49.262702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.414 [2024-11-20 16:47:49.284858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.414 [2024-11-20 16:47:49.284897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:04.414 [2024-11-20 16:47:49.284909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.089 ms 00:21:04.414 [2024-11-20 16:47:49.284917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.414 [2024-11-20 16:47:49.284953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:04.414 [2024-11-20 
16:47:49.284969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.284979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.284986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.284994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 
[2024-11-20 16:47:49.285157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:21:04.414 [2024-11-20 16:47:49.285349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:04.414 [2024-11-20 16:47:49.285524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:04.415 [2024-11-20 16:47:49.285773] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:04.415 [2024-11-20 16:47:49.285781] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:21:04.415 [2024-11-20 16:47:49.285788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:04.415 [2024-11-20 16:47:49.285796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:04.415 [2024-11-20 16:47:49.285803] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:04.415 [2024-11-20 16:47:49.285811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:04.415 [2024-11-20 16:47:49.285818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:04.415 [2024-11-20 16:47:49.285825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:04.415 [2024-11-20 16:47:49.285832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:04.415 [2024-11-20 16:47:49.285838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:04.415 [2024-11-20 16:47:49.285845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:04.415 [2024-11-20 16:47:49.285851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.415 [2024-11-20 16:47:49.285862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:04.415 [2024-11-20 16:47:49.285870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:21:04.415 [2024-11-20 16:47:49.285878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.719 [2024-11-20 16:47:49.298193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.719 [2024-11-20 16:47:49.298227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:04.719 [2024-11-20 16:47:49.298238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.298 ms 00:21:04.720 [2024-11-20 16:47:49.298246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.298641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.720 [2024-11-20 16:47:49.298656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:04.720 [2024-11-20 16:47:49.298664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:21:04.720 [2024-11-20 16:47:49.298671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.333404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.333452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.720 [2024-11-20 16:47:49.333463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.333470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.333558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.333568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.720 [2024-11-20 16:47:49.333575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.333582] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.333631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.333640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.720 [2024-11-20 16:47:49.333648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.333655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.333672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.333683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.720 [2024-11-20 16:47:49.333691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.333698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.409868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.409920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.720 [2024-11-20 16:47:49.409931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.409940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.472371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.472442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.720 [2024-11-20 16:47:49.472452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.472461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.472512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.472521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.720 [2024-11-20 16:47:49.472529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.472536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.472564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.472572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.720 [2024-11-20 16:47:49.472583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.472590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.472675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.472684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.720 [2024-11-20 16:47:49.472692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.472699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.472730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.472739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:04.720 [2024-11-20 16:47:49.472746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:04.720 [2024-11-20 16:47:49.472756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.472790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.472798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.720 [2024-11-20 16:47:49.472805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.472812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.472853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.720 [2024-11-20 16:47:49.472862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.720 [2024-11-20 16:47:49.472872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.720 [2024-11-20 16:47:49.472879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.720 [2024-11-20 16:47:49.473003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.611 ms, result 0 00:21:05.306 00:21:05.306 00:21:05.306 16:47:50 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:05.306 16:47:50 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:05.872 16:47:50 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:06.130 [2024-11-20 16:47:50.766238] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:21:06.130 [2024-11-20 16:47:50.766501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76665 ] 00:21:06.130 [2024-11-20 16:47:50.924586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.398 [2024-11-20 16:47:51.025569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.398 [2024-11-20 16:47:51.282203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:06.398 [2024-11-20 16:47:51.282271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:06.657 [2024-11-20 16:47:51.436286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.436345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:06.657 [2024-11-20 16:47:51.436358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:06.657 [2024-11-20 16:47:51.436366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.439066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.439101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.657 [2024-11-20 16:47:51.439112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:21:06.657 [2024-11-20 16:47:51.439119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.439189] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:06.657 [2024-11-20 16:47:51.439871] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:06.657 [2024-11-20 16:47:51.439896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.439904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.657 [2024-11-20 16:47:51.439912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:21:06.657 [2024-11-20 16:47:51.439920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.441108] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:06.657 [2024-11-20 16:47:51.453635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.453674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:06.657 [2024-11-20 16:47:51.453686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.528 ms 00:21:06.657 [2024-11-20 16:47:51.453694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.453788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.453800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:06.657 [2024-11-20 16:47:51.453809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:06.657 [2024-11-20 16:47:51.453816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.459124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:06.657 [2024-11-20 16:47:51.459155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.657 [2024-11-20 16:47:51.459166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:21:06.657 [2024-11-20 16:47:51.459174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.459262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.459271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.657 [2024-11-20 16:47:51.459279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:06.657 [2024-11-20 16:47:51.459286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.459311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.459322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:06.657 [2024-11-20 16:47:51.459329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:06.657 [2024-11-20 16:47:51.459336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.459357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:06.657 [2024-11-20 16:47:51.462603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.462630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.657 [2024-11-20 16:47:51.462641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.251 ms 00:21:06.657 [2024-11-20 16:47:51.462650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.462686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.462695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:06.657 [2024-11-20 16:47:51.462704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:06.657 [2024-11-20 16:47:51.462711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.462729] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:06.657 [2024-11-20 16:47:51.462749] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:06.657 [2024-11-20 16:47:51.462783] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:06.657 [2024-11-20 16:47:51.462798] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:06.657 [2024-11-20 16:47:51.462899] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:06.657 [2024-11-20 16:47:51.462909] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:06.657 [2024-11-20 16:47:51.462919] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:06.657 [2024-11-20 16:47:51.462929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:06.657 [2024-11-20 16:47:51.462940] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:06.657 [2024-11-20 16:47:51.462948] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:06.657 [2024-11-20 16:47:51.462955] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:06.657 [2024-11-20 16:47:51.462962] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:06.657 [2024-11-20 16:47:51.462970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:06.657 [2024-11-20 16:47:51.462977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.462984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:06.657 [2024-11-20 16:47:51.462992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:21:06.657 [2024-11-20 16:47:51.463002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.463112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.657 [2024-11-20 16:47:51.463123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:06.657 [2024-11-20 16:47:51.463135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:06.657 [2024-11-20 16:47:51.463145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.657 [2024-11-20 16:47:51.463270] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:06.657 [2024-11-20 16:47:51.463282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:06.657 [2024-11-20 16:47:51.463293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:06.657 [2024-11-20 16:47:51.463321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:06.657 [2024-11-20 16:47:51.463349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.657 [2024-11-20 16:47:51.463367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:06.657 [2024-11-20 16:47:51.463404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:06.657 [2024-11-20 16:47:51.463414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.657 [2024-11-20 16:47:51.463430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:06.657 [2024-11-20 16:47:51.463440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:06.657 [2024-11-20 16:47:51.463449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:06.657 [2024-11-20 16:47:51.463467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463476] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:06.657 [2024-11-20 16:47:51.463493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:06.657 [2024-11-20 16:47:51.463519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:06.657 [2024-11-20 16:47:51.463545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:06.657 [2024-11-20 16:47:51.463570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:06.657 [2024-11-20 16:47:51.463596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.657 [2024-11-20 16:47:51.463613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:06.657 [2024-11-20 16:47:51.463621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:06.657 [2024-11-20 16:47:51.463630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.657 [2024-11-20 16:47:51.463638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:06.657 [2024-11-20 16:47:51.463649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:06.657 [2024-11-20 16:47:51.463658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:06.657 [2024-11-20 16:47:51.463676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:06.657 [2024-11-20 16:47:51.463685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463694] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:06.657 [2024-11-20 16:47:51.463704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:06.657 [2024-11-20 16:47:51.463713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.657 [2024-11-20 16:47:51.463734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:06.657 [2024-11-20 16:47:51.463743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:06.657 [2024-11-20 16:47:51.463752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:06.657 
[2024-11-20 16:47:51.463761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:06.657 [2024-11-20 16:47:51.463769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:06.657 [2024-11-20 16:47:51.463778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:06.657 [2024-11-20 16:47:51.463789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:06.657 [2024-11-20 16:47:51.463800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.657 [2024-11-20 16:47:51.463811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:06.657 [2024-11-20 16:47:51.463821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:06.657 [2024-11-20 16:47:51.463831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:06.657 [2024-11-20 16:47:51.463840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:06.657 [2024-11-20 16:47:51.463849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:06.658 [2024-11-20 16:47:51.463859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:06.658 [2024-11-20 16:47:51.463869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:06.658 [2024-11-20 16:47:51.463878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:06.658 [2024-11-20 16:47:51.463888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:06.658 [2024-11-20 16:47:51.463897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:06.658 [2024-11-20 16:47:51.463907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:06.658 [2024-11-20 16:47:51.463917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:06.658 [2024-11-20 16:47:51.463927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:06.658 [2024-11-20 16:47:51.463937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:06.658 [2024-11-20 16:47:51.463946] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:06.658 [2024-11-20 16:47:51.463957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.658 [2024-11-20 16:47:51.463968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:06.658 [2024-11-20 16:47:51.463977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:06.658 [2024-11-20 16:47:51.463987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:06.658 [2024-11-20 16:47:51.463996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:06.658 [2024-11-20 16:47:51.464005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.658 [2024-11-20 16:47:51.464015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:06.658 [2024-11-20 16:47:51.464028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:21:06.658 [2024-11-20 16:47:51.464037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.658 [2024-11-20 16:47:51.490474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.658 [2024-11-20 16:47:51.490519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.658 [2024-11-20 16:47:51.490530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.338 ms 00:21:06.658 [2024-11-20 16:47:51.490538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.658 [2024-11-20 16:47:51.490684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.658 [2024-11-20 16:47:51.490698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:06.658 [2024-11-20 16:47:51.490706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:06.658 [2024-11-20 16:47:51.490713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.658 [2024-11-20 16:47:51.535015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.658 [2024-11-20 16:47:51.535229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.658 [2024-11-20 16:47:51.535249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.278 ms 00:21:06.658 [2024-11-20 16:47:51.535262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.658 [2024-11-20 16:47:51.535396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.658 [2024-11-20 16:47:51.535409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.658 [2024-11-20 16:47:51.535418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:06.658 [2024-11-20 16:47:51.535425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.658 [2024-11-20 16:47:51.535762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.658 [2024-11-20 16:47:51.535777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.658 [2024-11-20 16:47:51.535786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:21:06.658 [2024-11-20 16:47:51.535800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.658 [2024-11-20 16:47:51.535930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.658 [2024-11-20 16:47:51.535939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.658 [2024-11-20 16:47:51.535947] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:06.658 [2024-11-20 16:47:51.535955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.549680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.549817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.916 [2024-11-20 16:47:51.549833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.704 ms 00:21:06.916 [2024-11-20 16:47:51.549841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.562288] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:06.916 [2024-11-20 16:47:51.562321] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:06.916 [2024-11-20 16:47:51.562333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.562340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:06.916 [2024-11-20 16:47:51.562349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.378 ms 00:21:06.916 [2024-11-20 16:47:51.562356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.586407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.586458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:06.916 [2024-11-20 16:47:51.586470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.958 ms 00:21:06.916 [2024-11-20 16:47:51.586478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.598152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.598187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:06.916 [2024-11-20 16:47:51.598197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.586 ms 00:21:06.916 [2024-11-20 16:47:51.598204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.609630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.609661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:06.916 [2024-11-20 16:47:51.609672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.354 ms 00:21:06.916 [2024-11-20 16:47:51.609679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.610289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.610315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:06.916 [2024-11-20 16:47:51.610324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:21:06.916 [2024-11-20 16:47:51.610331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.666596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.666655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:06.916 [2024-11-20 16:47:51.666669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.240 ms 00:21:06.916 [2024-11-20 16:47:51.666677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.677245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:06.916 [2024-11-20 16:47:51.692074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.692116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:06.916 [2024-11-20 16:47:51.692129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.277 ms 00:21:06.916 [2024-11-20 16:47:51.692138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.916 [2024-11-20 16:47:51.692234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.916 [2024-11-20 16:47:51.692245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:06.916 [2024-11-20 16:47:51.692253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:06.917 [2024-11-20 16:47:51.692260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.917 [2024-11-20 16:47:51.692309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.917 [2024-11-20 16:47:51.692318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:06.917 [2024-11-20 16:47:51.692326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:06.917 [2024-11-20 16:47:51.692333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.917 [2024-11-20 16:47:51.692357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.917 [2024-11-20 16:47:51.692367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:06.917 [2024-11-20 16:47:51.692374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:06.917 [2024-11-20 16:47:51.692401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.917 [2024-11-20 16:47:51.692444] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:06.917 [2024-11-20 16:47:51.692454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.917 [2024-11-20 16:47:51.692461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:06.917 [2024-11-20 16:47:51.692469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:06.917 [2024-11-20 16:47:51.692476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.917 [2024-11-20 16:47:51.715812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.917 [2024-11-20 16:47:51.715977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:06.917 [2024-11-20 16:47:51.715996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.315 ms 00:21:06.917 [2024-11-20 16:47:51.716005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.917 [2024-11-20 16:47:51.716106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.917 [2024-11-20 16:47:51.716118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:06.917 [2024-11-20 16:47:51.716127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:06.917 [2024-11-20 16:47:51.716134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:06.917 [2024-11-20 16:47:51.716939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:06.917 [2024-11-20 16:47:51.720163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.383 ms, result 0 00:21:06.917 [2024-11-20 16:47:51.720891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:06.917 [2024-11-20 16:47:51.733884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:07.177  [2024-11-20T16:47:52.063Z] Copying: 4096/4096 [kB] (average 38 MBps)[2024-11-20 16:47:51.839448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:07.177 [2024-11-20 16:47:51.848805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.848848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:07.177 [2024-11-20 16:47:51.848863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.177 [2024-11-20 16:47:51.848879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.848902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:07.177 [2024-11-20 16:47:51.851528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.851558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:07.177 [2024-11-20 16:47:51.851569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:21:07.177 [2024-11-20 16:47:51.851578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.852971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.853004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:07.177 [2024-11-20 16:47:51.853014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:21:07.177 [2024-11-20 16:47:51.853022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.856844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.856891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:07.177 [2024-11-20 16:47:51.856901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:21:07.177 [2024-11-20 16:47:51.856908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.863835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.863865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:07.177 [2024-11-20 16:47:51.863875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms 00:21:07.177 [2024-11-20 16:47:51.863883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.889397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.889541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:07.177 [2024-11-20 16:47:51.889564] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.453 ms 00:21:07.177 [2024-11-20 16:47:51.889576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.903991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.904038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:07.177 [2024-11-20 16:47:51.904054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.353 ms 00:21:07.177 [2024-11-20 16:47:51.904062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.904204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.904215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:07.177 [2024-11-20 16:47:51.904224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:07.177 [2024-11-20 16:47:51.904231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.927256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.927294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:07.177 [2024-11-20 16:47:51.927307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.001 ms 00:21:07.177 [2024-11-20 16:47:51.927315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.949914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.949954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:07.177 [2024-11-20 16:47:51.949966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.556 ms 00:21:07.177 [2024-11-20 16:47:51.949974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.972928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.973065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:07.177 [2024-11-20 16:47:51.973091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.913 ms 00:21:07.177 [2024-11-20 16:47:51.973103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.995937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.177 [2024-11-20 16:47:51.996069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:07.177 [2024-11-20 16:47:51.996090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.733 ms 00:21:07.177 [2024-11-20 16:47:51.996103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.177 [2024-11-20 16:47:51.996145] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:07.177 [2024-11-20 16:47:51.996166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:07.177 [2024-11-20 16:47:51.996177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:07.177 [2024-11-20 16:47:51.996188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:07.177 [2024-11-20 16:47:51.996200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:07.177 [2024-11-20 16:47:51.996213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:07.177 [2024-11-20 16:47:51.996225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:07.177 [2024-11-20 16:47:51.996236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:07.178 [2024-11-20 16:47:51.996770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996805] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:07.179 [2024-11-20 16:47:51.996980] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:07.179 [2024-11-20 16:47:51.996988] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:21:07.179 [2024-11-20 16:47:51.996997] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:07.179 [2024-11-20 16:47:51.997005] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:07.179 [2024-11-20 16:47:51.997012] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:07.179 [2024-11-20 16:47:51.997020] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:07.179 [2024-11-20 16:47:51.997027] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:07.179 [2024-11-20 16:47:51.997035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:07.179 [2024-11-20 16:47:51.997042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:07.179 [2024-11-20 16:47:51.997048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:07.179 [2024-11-20 16:47:51.997054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:07.179 [2024-11-20 16:47:51.997061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.179 [2024-11-20 16:47:51.997072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:07.179 [2024-11-20 16:47:51.997080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:21:07.179 [2024-11-20 16:47:51.997087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.179 [2024-11-20 16:47:52.009927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.179 [2024-11-20 16:47:52.009961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:07.179 [2024-11-20 16:47:52.009973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.809 ms 00:21:07.179 [2024-11-20 16:47:52.009982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.179 [2024-11-20 16:47:52.010339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.179 [2024-11-20 16:47:52.010348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:07.179 [2024-11-20 16:47:52.010357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:21:07.179 [2024-11-20 16:47:52.010364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.179 [2024-11-20 16:47:52.045847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.179 [2024-11-20 16:47:52.045891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.179 [2024-11-20 16:47:52.045902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.179 [2024-11-20 16:47:52.045909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.179 [2024-11-20 16:47:52.045993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.179 [2024-11-20 16:47:52.046002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.179 [2024-11-20 16:47:52.046010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.179 [2024-11-20 16:47:52.046017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.179 [2024-11-20 16:47:52.046062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.179 [2024-11-20 16:47:52.046071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.179 [2024-11-20 16:47:52.046079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.179 [2024-11-20 16:47:52.046086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.179 [2024-11-20 16:47:52.046104] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.179 [2024-11-20 16:47:52.046114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.179 [2024-11-20 16:47:52.046122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.179 [2024-11-20 16:47:52.046130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.437 [2024-11-20 16:47:52.126009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.437 [2024-11-20 16:47:52.126060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.437 [2024-11-20 16:47:52.126071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.437 [2024-11-20 16:47:52.126079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.437 [2024-11-20 16:47:52.191676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.437 [2024-11-20 16:47:52.191722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.437 [2024-11-20 16:47:52.191733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.437 [2024-11-20 16:47:52.191741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.437 [2024-11-20 16:47:52.191803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.437 [2024-11-20 16:47:52.191813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.437 [2024-11-20 16:47:52.191821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.437 [2024-11-20 16:47:52.191828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.437 [2024-11-20 16:47:52.191857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.437 [2024-11-20 16:47:52.191865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.438 [2024-11-20 16:47:52.191877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.438 [2024-11-20 16:47:52.191885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.438 [2024-11-20 16:47:52.191973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.438 [2024-11-20 16:47:52.191982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.438 [2024-11-20 16:47:52.191990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.438 [2024-11-20 16:47:52.191998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.438 [2024-11-20 16:47:52.192028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.438 [2024-11-20 16:47:52.192037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:07.438 [2024-11-20 16:47:52.192044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.438 [2024-11-20 16:47:52.192054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.438 [2024-11-20 16:47:52.192090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.438 [2024-11-20 16:47:52.192099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.438 [2024-11-20 16:47:52.192106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.438 [2024-11-20 16:47:52.192114] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:07.438 [2024-11-20 16:47:52.192153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.438 [2024-11-20 16:47:52.192162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.438 [2024-11-20 16:47:52.192173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.438 [2024-11-20 16:47:52.192180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.438 [2024-11-20 16:47:52.192309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.499 ms, result 0 00:21:08.004 00:21:08.004 00:21:08.004 16:47:52 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76690 00:21:08.004 16:47:52 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76690 00:21:08.004 16:47:52 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:08.004 16:47:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76690 ']' 00:21:08.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.004 16:47:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.004 16:47:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.004 16:47:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.004 16:47:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.004 16:47:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:08.262 [2024-11-20 16:47:52.961859] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:21:08.262 [2024-11-20 16:47:52.962176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76690 ] 00:21:08.262 [2024-11-20 16:47:53.121053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.520 [2024-11-20 16:47:53.224523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.087 16:47:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.087 16:47:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:09.087 16:47:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:09.370 [2024-11-20 16:47:54.043673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.370 [2024-11-20 16:47:54.043932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.370 [2024-11-20 16:47:54.213767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.213820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:09.370 [2024-11-20 16:47:54.213836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.370 [2024-11-20 16:47:54.213844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.216565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.216598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.370 [2024-11-20 16:47:54.216610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:21:09.370 [2024-11-20 16:47:54.216618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.216743] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:09.370 [2024-11-20 16:47:54.217423] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:09.370 [2024-11-20 16:47:54.217450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.217459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.370 [2024-11-20 16:47:54.217470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:21:09.370 [2024-11-20 16:47:54.217477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.218976] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:09.370 [2024-11-20 16:47:54.231483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.231524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:09.370 [2024-11-20 16:47:54.231542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:21:09.370 [2024-11-20 16:47:54.231557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.231681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.231701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:09.370 [2024-11-20 16:47:54.231713] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:09.370 [2024-11-20 16:47:54.231722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.236938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.236976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.370 [2024-11-20 16:47:54.236985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:21:09.370 [2024-11-20 16:47:54.236995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.237093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.237105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.370 [2024-11-20 16:47:54.237114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:09.370 [2024-11-20 16:47:54.237123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.237156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.237166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:09.370 [2024-11-20 16:47:54.237173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:09.370 [2024-11-20 16:47:54.237182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.237207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:09.370 [2024-11-20 16:47:54.240652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.240679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.370 [2024-11-20 16:47:54.240689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.450 ms 00:21:09.370 [2024-11-20 16:47:54.240697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.240733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.240741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:09.370 [2024-11-20 16:47:54.240751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:09.370 [2024-11-20 16:47:54.240761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.240781] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:09.370 [2024-11-20 16:47:54.240798] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:09.370 [2024-11-20 16:47:54.240838] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:09.370 [2024-11-20 16:47:54.240853] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:09.370 [2024-11-20 16:47:54.240958] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:09.370 [2024-11-20 16:47:54.240968] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:09.370 [2024-11-20 16:47:54.240982] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:09.370 [2024-11-20 16:47:54.241001] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:09.370 [2024-11-20 16:47:54.241012] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:09.370 [2024-11-20 16:47:54.241020] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:09.370 [2024-11-20 16:47:54.241030] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:09.370 [2024-11-20 16:47:54.241037] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:09.370 [2024-11-20 16:47:54.241047] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:09.370 [2024-11-20 16:47:54.241054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.241063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:09.370 [2024-11-20 16:47:54.241071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:09.370 [2024-11-20 16:47:54.241079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.241178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.370 [2024-11-20 16:47:54.241188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:09.370 [2024-11-20 16:47:54.241196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:09.370 [2024-11-20 16:47:54.241205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.370 [2024-11-20 16:47:54.241305] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:09.371 [2024-11-20 16:47:54.241316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:09.371 [2024-11-20 16:47:54.241324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:09.371 [2024-11-20 16:47:54.241350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:09.371 [2024-11-20 16:47:54.241387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.371 [2024-11-20 16:47:54.241403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:09.371 [2024-11-20 16:47:54.241411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:09.371 [2024-11-20 16:47:54.241417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.371 [2024-11-20 16:47:54.241426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:09.371 [2024-11-20 16:47:54.241432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:09.371 [2024-11-20 16:47:54.241440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.371 
[2024-11-20 16:47:54.241447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:09.371 [2024-11-20 16:47:54.241455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:09.371 [2024-11-20 16:47:54.241482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:09.371 [2024-11-20 16:47:54.241509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:09.371 [2024-11-20 16:47:54.241530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:09.371 [2024-11-20 16:47:54.241553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:09.371 [2024-11-20 16:47:54.241574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.371 [2024-11-20 16:47:54.241591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:09.371 [2024-11-20 16:47:54.241599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:09.371 [2024-11-20 16:47:54.241605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.371 [2024-11-20 16:47:54.241613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:09.371 [2024-11-20 16:47:54.241620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:09.371 [2024-11-20 16:47:54.241630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:09.371 [2024-11-20 16:47:54.241645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:09.371 [2024-11-20 16:47:54.241652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241661] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:09.371 [2024-11-20 16:47:54.241668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:09.371 [2024-11-20 16:47:54.241678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.371 [2024-11-20 16:47:54.241694] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:09.371 [2024-11-20 16:47:54.241701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:09.371 [2024-11-20 16:47:54.241709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:09.371 [2024-11-20 16:47:54.241716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:09.371 [2024-11-20 16:47:54.241724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:09.371 [2024-11-20 16:47:54.241730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:09.371 [2024-11-20 16:47:54.241742] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:09.371 [2024-11-20 16:47:54.241752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.371 [2024-11-20 16:47:54.241763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:09.371 [2024-11-20 16:47:54.241771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:09.371 [2024-11-20 16:47:54.241780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:09.371 [2024-11-20 16:47:54.241788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:09.371 [2024-11-20 16:47:54.241796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:09.371 [2024-11-20 16:47:54.241804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:09.371 [2024-11-20 16:47:54.241812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:09.371 [2024-11-20 16:47:54.241819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:09.371 [2024-11-20 16:47:54.241827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:09.371 [2024-11-20 16:47:54.241834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:09.371 [2024-11-20 16:47:54.241843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:09.371 [2024-11-20 16:47:54.241850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:09.371 [2024-11-20 16:47:54.241859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:09.371 [2024-11-20 16:47:54.241866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:09.371 [2024-11-20 16:47:54.241874] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:09.371 [2024-11-20 
16:47:54.241882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.371 [2024-11-20 16:47:54.241893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:09.371 [2024-11-20 16:47:54.241901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:09.371 [2024-11-20 16:47:54.241910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:09.371 [2024-11-20 16:47:54.241917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:09.371 [2024-11-20 16:47:54.241926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.371 [2024-11-20 16:47:54.241933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:09.372 [2024-11-20 16:47:54.241942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:21:09.372 [2024-11-20 16:47:54.241949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.268508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.268548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.630 [2024-11-20 16:47:54.268562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.484 ms 00:21:09.630 [2024-11-20 16:47:54.268571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.268716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.268726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:09.630 [2024-11-20 16:47:54.268736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:09.630 [2024-11-20 16:47:54.268745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.299364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.299414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.630 [2024-11-20 16:47:54.299432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.594 ms 00:21:09.630 [2024-11-20 16:47:54.299440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.299519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.299529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.630 [2024-11-20 16:47:54.299539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:09.630 [2024-11-20 16:47:54.299547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.299868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.299887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.630 [2024-11-20 16:47:54.299898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:21:09.630 [2024-11-20 16:47:54.299907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.300032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.300040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.630 [2024-11-20 16:47:54.300050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:09.630 [2024-11-20 16:47:54.300057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.314368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.314409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.630 [2024-11-20 16:47:54.314422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.286 ms 00:21:09.630 [2024-11-20 16:47:54.314429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.630 [2024-11-20 16:47:54.326538] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:09.630 [2024-11-20 16:47:54.326571] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:09.630 [2024-11-20 16:47:54.326585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.630 [2024-11-20 16:47:54.326593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:09.630 [2024-11-20 16:47:54.326603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.030 ms 00:21:09.630 [2024-11-20 16:47:54.326610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.351011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.351062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:09.631 [2024-11-20 16:47:54.351075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.311 ms 00:21:09.631 [2024-11-20 16:47:54.351084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.362694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.362724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:09.631 [2024-11-20 16:47:54.362737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.531 ms 00:21:09.631 [2024-11-20 16:47:54.362744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.373585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.373735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:09.631 [2024-11-20 16:47:54.373755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.772 ms 00:21:09.631 [2024-11-20 16:47:54.373763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.374406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.374424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:09.631 [2024-11-20 16:47:54.374434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:21:09.631 [2024-11-20 16:47:54.374442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 
16:47:54.440128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.440189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:09.631 [2024-11-20 16:47:54.440207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.659 ms 00:21:09.631 [2024-11-20 16:47:54.440216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.450917] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:09.631 [2024-11-20 16:47:54.466181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.466242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:09.631 [2024-11-20 16:47:54.466258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.833 ms 00:21:09.631 [2024-11-20 16:47:54.466268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.466364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.466395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:09.631 [2024-11-20 16:47:54.466405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:09.631 [2024-11-20 16:47:54.466414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.466465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.466476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:09.631 [2024-11-20 16:47:54.466484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:09.631 [2024-11-20 16:47:54.466493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.466520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.466530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:09.631 [2024-11-20 16:47:54.466537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:09.631 [2024-11-20 16:47:54.466549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.466580] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:09.631 [2024-11-20 16:47:54.466594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.466601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:09.631 [2024-11-20 16:47:54.466613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:09.631 [2024-11-20 16:47:54.466620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.490612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.490657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:09.631 [2024-11-20 16:47:54.490678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.965 ms 00:21:09.631 [2024-11-20 16:47:54.490686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.490787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.631 [2024-11-20 16:47:54.490798] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:09.631 [2024-11-20 16:47:54.490808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:09.631 [2024-11-20 16:47:54.490817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.631 [2024-11-20 16:47:54.491602] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.631 [2024-11-20 16:47:54.494613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.552 ms, result 0 00:21:09.631 [2024-11-20 16:47:54.495435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.890 Some configs were skipped because the RPC state that can call them passed over. 00:21:09.890 16:47:54 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:09.890 [2024-11-20 16:47:54.733919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.890 [2024-11-20 16:47:54.734125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:09.890 [2024-11-20 16:47:54.734205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:21:09.890 [2024-11-20 16:47:54.734265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.890 [2024-11-20 16:47:54.734322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.025 ms, result 0 00:21:09.890 true 00:21:09.890 16:47:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:10.148 [2024-11-20 16:47:54.945436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.148 [2024-11-20 16:47:54.945591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:10.148 [2024-11-20 16:47:54.945705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:21:10.148 [2024-11-20 16:47:54.945734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.149 [2024-11-20 16:47:54.945819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.294 ms, result 0 00:21:10.149 true 00:21:10.149 16:47:54 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76690 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76690 ']' 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76690 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76690 00:21:10.149 killing process with pid 76690 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76690' 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76690 00:21:10.149 16:47:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76690 00:21:11.084 [2024-11-20 16:47:55.702724] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.702778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.084 [2024-11-20 16:47:55.702791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:11.084 [2024-11-20 16:47:55.702800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.702821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:11.084 [2024-11-20 16:47:55.705371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.705433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.084 [2024-11-20 16:47:55.705445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:21:11.084 [2024-11-20 16:47:55.705453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.705738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.705786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.084 [2024-11-20 16:47:55.705799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:21:11.084 [2024-11-20 16:47:55.705807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.709791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.709883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.084 [2024-11-20 16:47:55.709947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:21:11.084 [2024-11-20 16:47:55.710001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.717190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.717306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.084 [2024-11-20 16:47:55.717366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.138 ms 00:21:11.084 [2024-11-20 16:47:55.717467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.727147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.727249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.084 [2024-11-20 16:47:55.727309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.604 ms 00:21:11.084 [2024-11-20 16:47:55.727372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.734387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.734480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.084 [2024-11-20 16:47:55.734537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.930 ms 00:21:11.084 [2024-11-20 16:47:55.734559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.734772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.734851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.084 [2024-11-20 16:47:55.734923] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:21:11.084 [2024-11-20 16:47:55.734948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.745351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.745458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:11.084 [2024-11-20 16:47:55.745511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.367 ms 00:21:11.084 [2024-11-20 16:47:55.745533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.755098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.755193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:11.084 [2024-11-20 16:47:55.755267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.521 ms 00:21:11.084 [2024-11-20 16:47:55.755290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.764365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.764470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:11.084 [2024-11-20 16:47:55.764523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.026 ms 00:21:11.084 [2024-11-20 16:47:55.764544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.773854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.084 [2024-11-20 16:47:55.773953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:11.084 [2024-11-20 16:47:55.774005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.227 ms 00:21:11.084 [2024-11-20 16:47:55.774026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.084 [2024-11-20 16:47:55.774067] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:11.084 [2024-11-20 16:47:55.774182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774775] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:11.084 [2024-11-20 16:47:55.774973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.775970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 
[2024-11-20 16:47:55.776086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.776972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:11.085 [2024-11-20 16:47:55.777268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:11.085 [2024-11-20 16:47:55.777677] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:11.085 [2024-11-20 16:47:55.777690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:21:11.085 [2024-11-20 16:47:55.777703] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:11.086 [2024-11-20 16:47:55.777714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:11.086 [2024-11-20 16:47:55.777721] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:11.086 [2024-11-20 16:47:55.777730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:11.086 [2024-11-20 16:47:55.777737] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:11.086 [2024-11-20 16:47:55.777751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:11.086 [2024-11-20 16:47:55.777758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:11.086 [2024-11-20 16:47:55.777767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:11.086 [2024-11-20 16:47:55.777773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:11.086 [2024-11-20 16:47:55.777783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:11.086 [2024-11-20 16:47:55.777790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:11.086 [2024-11-20 16:47:55.777800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms 00:21:11.086 [2024-11-20 16:47:55.777807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.086 [2024-11-20 16:47:55.790750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.086 [2024-11-20 16:47:55.790846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.086 [2024-11-20 16:47:55.790899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.894 ms 00:21:11.086 [2024-11-20 16:47:55.790921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.086 [2024-11-20 16:47:55.791293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.086 [2024-11-20 16:47:55.791366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.086 [2024-11-20 16:47:55.791453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:21:11.086 [2024-11-20 16:47:55.791482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.086 [2024-11-20 16:47:55.835752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.086 [2024-11-20 16:47:55.835892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.086 [2024-11-20 16:47:55.835952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.086 [2024-11-20 16:47:55.835975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.086 [2024-11-20 16:47:55.837293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.086 [2024-11-20 16:47:55.837398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.086 [2024-11-20 16:47:55.837449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.086 [2024-11-20 16:47:55.837474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.086 [2024-11-20 16:47:55.837583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.086 [2024-11-20 16:47:55.837642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.086 [2024-11-20 16:47:55.837706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.086 [2024-11-20 16:47:55.837727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.086 [2024-11-20 16:47:55.837789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.086 [2024-11-20 16:47:55.837812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.086 [2024-11-20 16:47:55.837833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.086 [2024-11-20 16:47:55.837892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.086 [2024-11-20 16:47:55.919132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.086 [2024-11-20 16:47:55.919287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.086 [2024-11-20 16:47:55.919342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.086 [2024-11-20 16:47:55.919356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 
16:47:55.985839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.344 [2024-11-20 16:47:55.985998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.344 [2024-11-20 16:47:55.986016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.344 [2024-11-20 16:47:55.986027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 16:47:55.986112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.344 [2024-11-20 16:47:55.986121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.344 [2024-11-20 16:47:55.986134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.344 [2024-11-20 16:47:55.986141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 16:47:55.986169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.344 [2024-11-20 16:47:55.986176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.344 [2024-11-20 16:47:55.986186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.344 [2024-11-20 16:47:55.986193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 16:47:55.986286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.344 [2024-11-20 16:47:55.986296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.344 [2024-11-20 16:47:55.986305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.344 [2024-11-20 16:47:55.986312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 16:47:55.986344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.344 [2024-11-20 16:47:55.986352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:11.344 [2024-11-20 16:47:55.986361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.344 [2024-11-20 16:47:55.986368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 16:47:55.986425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.344 [2024-11-20 16:47:55.986437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.344 [2024-11-20 16:47:55.986447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.344 [2024-11-20 16:47:55.986458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 16:47:55.986509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.344 [2024-11-20 16:47:55.986520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.344 [2024-11-20 16:47:55.986532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.344 [2024-11-20 16:47:55.986542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.344 [2024-11-20 16:47:55.986699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 283.954 ms, result 0 00:21:11.909 16:47:56 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:11.909 [2024-11-20 16:47:56.715049] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:21:11.910 [2024-11-20 16:47:56.715283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76748 ] 00:21:12.167 [2024-11-20 16:47:56.873074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.167 [2024-11-20 16:47:56.972879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.425 [2024-11-20 16:47:57.224066] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.425 [2024-11-20 16:47:57.224296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.684 [2024-11-20 16:47:57.378744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.378803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:12.684 [2024-11-20 16:47:57.378816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:12.684 [2024-11-20 16:47:57.378824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.381462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.381614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.684 [2024-11-20 16:47:57.381631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:21:12.684 [2024-11-20 16:47:57.381639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.381706] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:12.684 [2024-11-20 16:47:57.382374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:12.684 [2024-11-20 16:47:57.382413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.382421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.684 [2024-11-20 16:47:57.382429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:21:12.684 [2024-11-20 16:47:57.382437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.383557] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:12.684 [2024-11-20 16:47:57.395904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.395940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:12.684 [2024-11-20 16:47:57.395952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.349 ms 00:21:12.684 [2024-11-20 16:47:57.395960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.396042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.396053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:12.684 [2024-11-20 16:47:57.396062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:12.684 [2024-11-20 
16:47:57.396069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.400899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.401029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.684 [2024-11-20 16:47:57.401044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.788 ms 00:21:12.684 [2024-11-20 16:47:57.401052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.401143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.401153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.684 [2024-11-20 16:47:57.401161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:12.684 [2024-11-20 16:47:57.401168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.401193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.401204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:12.684 [2024-11-20 16:47:57.401211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:12.684 [2024-11-20 16:47:57.401219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.684 [2024-11-20 16:47:57.401239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:12.684 [2024-11-20 16:47:57.404623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.684 [2024-11-20 16:47:57.404650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.685 [2024-11-20 16:47:57.404659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.389 ms 00:21:12.685 [2024-11-20 16:47:57.404666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.685 [2024-11-20 16:47:57.404700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.685 [2024-11-20 16:47:57.404709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:12.685 [2024-11-20 16:47:57.404717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:12.685 [2024-11-20 16:47:57.404724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.685 [2024-11-20 16:47:57.404741] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:12.685 [2024-11-20 16:47:57.404760] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:12.685 [2024-11-20 16:47:57.404794] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:12.685 [2024-11-20 16:47:57.404809] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:12.685 [2024-11-20 16:47:57.404911] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:12.685 [2024-11-20 16:47:57.404922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:12.685 [2024-11-20 16:47:57.404932] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:21:12.685 [2024-11-20 16:47:57.404941] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:12.685 [2024-11-20 16:47:57.404953] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:12.685 [2024-11-20 16:47:57.404960] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:12.685 [2024-11-20 16:47:57.404968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:12.685 [2024-11-20 16:47:57.404974] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:12.685 [2024-11-20 16:47:57.404982] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:12.685 [2024-11-20 16:47:57.404988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.685 [2024-11-20 16:47:57.404996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:12.685 [2024-11-20 16:47:57.405003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:21:12.685 [2024-11-20 16:47:57.405010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.685 [2024-11-20 16:47:57.405097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.685 [2024-11-20 16:47:57.405105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:12.685 [2024-11-20 16:47:57.405114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:12.685 [2024-11-20 16:47:57.405120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.685 [2024-11-20 16:47:57.405233] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:12.685 [2024-11-20 16:47:57.405244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:12.685 [2024-11-20 16:47:57.405252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:12.685 [2024-11-20 16:47:57.405274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:12.685 [2024-11-20 16:47:57.405295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.685 [2024-11-20 16:47:57.405308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:12.685 [2024-11-20 16:47:57.405314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:12.685 [2024-11-20 16:47:57.405321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.685 [2024-11-20 16:47:57.405333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:12.685 [2024-11-20 16:47:57.405340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:12.685 [2024-11-20 16:47:57.405346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:12.685 [2024-11-20 16:47:57.405360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:12.685 [2024-11-20 16:47:57.405403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:12.685 [2024-11-20 16:47:57.405424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:12.685 [2024-11-20 16:47:57.405443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:12.685 [2024-11-20 16:47:57.405463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:12.685 [2024-11-20 16:47:57.405483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.685 [2024-11-20 16:47:57.405503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:12.685 [2024-11-20 16:47:57.405510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:12.685 [2024-11-20 16:47:57.405516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.685 [2024-11-20 16:47:57.405522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:12.685 [2024-11-20 16:47:57.405529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:12.685 [2024-11-20 16:47:57.405535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:12.685 [2024-11-20 16:47:57.405548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:12.685 [2024-11-20 16:47:57.405554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405560] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:12.685 [2024-11-20 16:47:57.405566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:12.685 [2024-11-20 16:47:57.405573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.685 [2024-11-20 16:47:57.405590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:12.685 [2024-11-20 16:47:57.405596] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:12.685 [2024-11-20 16:47:57.405603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:12.685 [2024-11-20 16:47:57.405609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:12.685 [2024-11-20 16:47:57.405615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:12.685 [2024-11-20 16:47:57.405622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:12.685 [2024-11-20 16:47:57.405630] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:12.685 [2024-11-20 16:47:57.405638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.685 [2024-11-20 16:47:57.405646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:12.685 [2024-11-20 16:47:57.405654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:12.685 [2024-11-20 16:47:57.405662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:12.685 [2024-11-20 16:47:57.405668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:12.685 [2024-11-20 16:47:57.405675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:12.685 [2024-11-20 16:47:57.405682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:12.685 [2024-11-20 16:47:57.405689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:12.685 [2024-11-20 16:47:57.405696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:12.685 [2024-11-20 16:47:57.405703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:12.685 [2024-11-20 16:47:57.405709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:12.686 [2024-11-20 16:47:57.405716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:12.686 [2024-11-20 16:47:57.405724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:12.686 [2024-11-20 16:47:57.405731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:12.686 [2024-11-20 16:47:57.405738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:12.686 [2024-11-20 16:47:57.405744] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:12.686 [2024-11-20 16:47:57.405752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.686 [2024-11-20 16:47:57.405759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:12.686 [2024-11-20 16:47:57.405766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:12.686 [2024-11-20 16:47:57.405773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:12.686 [2024-11-20 16:47:57.405780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:12.686 [2024-11-20 16:47:57.405787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.405794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:12.686 [2024-11-20 16:47:57.405804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:21:12.686 [2024-11-20 16:47:57.405811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.431624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.431799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.686 [2024-11-20 16:47:57.431815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:21:12.686 [2024-11-20 16:47:57.431824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.431955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.431969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.686 [2024-11-20 16:47:57.431978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:12.686 [2024-11-20 16:47:57.431985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.472163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.472211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.686 [2024-11-20 16:47:57.472224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.155 ms 00:21:12.686 [2024-11-20 16:47:57.472235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.472345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.472357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.686 [2024-11-20 16:47:57.472366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:12.686 [2024-11-20 16:47:57.472374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.472715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.472743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.686 [2024-11-20 16:47:57.472752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:21:12.686 [2024-11-20 16:47:57.472765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.472892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.472905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.686 [2024-11-20 16:47:57.472913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:21:12.686 [2024-11-20 16:47:57.472920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.486018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.486177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.686 [2024-11-20 16:47:57.486193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.078 ms 00:21:12.686 [2024-11-20 16:47:57.486200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.498466] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:12.686 [2024-11-20 16:47:57.498499] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.686 [2024-11-20 16:47:57.498510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.498518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.686 [2024-11-20 16:47:57.498527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.200 ms 00:21:12.686 [2024-11-20 16:47:57.498535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.522489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.522530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.686 [2024-11-20 16:47:57.522541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.876 ms 00:21:12.686 [2024-11-20 16:47:57.522549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.533900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.533930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.686 [2024-11-20 16:47:57.533940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.278 ms 00:21:12.686 [2024-11-20 16:47:57.533947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.545008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.545131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.686 [2024-11-20 16:47:57.545148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.996 ms 00:21:12.686 [2024-11-20 16:47:57.545156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.686 [2024-11-20 16:47:57.545780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.686 [2024-11-20 16:47:57.545801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.686 [2024-11-20 16:47:57.545810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:21:12.686 [2024-11-20 16:47:57.545818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.944 [2024-11-20 16:47:57.600475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.944 [2024-11-20 
16:47:57.600525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.944 [2024-11-20 16:47:57.600539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.632 ms 00:21:12.944 [2024-11-20 16:47:57.600547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.944 [2024-11-20 16:47:57.611355] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:12.944 [2024-11-20 16:47:57.625396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.944 [2024-11-20 16:47:57.625434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.945 [2024-11-20 16:47:57.625447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.735 ms 00:21:12.945 [2024-11-20 16:47:57.625456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.945 [2024-11-20 16:47:57.625552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.945 [2024-11-20 16:47:57.625563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.945 [2024-11-20 16:47:57.625571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:12.945 [2024-11-20 16:47:57.625579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.945 [2024-11-20 16:47:57.625626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.945 [2024-11-20 16:47:57.625635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.945 [2024-11-20 16:47:57.625643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:12.945 [2024-11-20 16:47:57.625650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.945 [2024-11-20 16:47:57.625674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.945 [2024-11-20 16:47:57.625684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:12.945 [2024-11-20 16:47:57.625692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.945 [2024-11-20 16:47:57.625699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.945 [2024-11-20 16:47:57.625730] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.945 [2024-11-20 16:47:57.625739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.945 [2024-11-20 16:47:57.625747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.945 [2024-11-20 16:47:57.625755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:12.945 [2024-11-20 16:47:57.625762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.945 [2024-11-20 16:47:57.649248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.945 [2024-11-20 16:47:57.649286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.945 [2024-11-20 16:47:57.649298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.467 ms 00:21:12.945 [2024-11-20 16:47:57.649307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.945 [2024-11-20 16:47:57.649414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.945 [2024-11-20 16:47:57.649426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.945 [2024-11-20 
16:47:57.649435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:12.945 [2024-11-20 16:47:57.649442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.945 [2024-11-20 16:47:57.650652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:12.945 [2024-11-20 16:47:57.653916] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.616 ms, result 0 00:21:12.945 [2024-11-20 16:47:57.654635] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:12.945 [2024-11-20 16:47:57.667531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:13.877  [2024-11-20T16:48:00.136Z] Copying: 45/256 [MB] (45 MBps) [2024-11-20T16:48:01.071Z] Copying: 88/256 [MB] (42 MBps) [2024-11-20T16:48:02.004Z] Copying: 131/256 [MB] (43 MBps) [2024-11-20T16:48:02.937Z] Copying: 174/256 [MB] (43 MBps) [2024-11-20T16:48:03.870Z] Copying: 215/256 [MB] (41 MBps) [2024-11-20T16:48:04.128Z] Copying: 256/256 [MB] (average 42 MBps)[2024-11-20 16:48:04.028849] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:19.242 [2024-11-20 16:48:04.038283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.038333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:19.242 [2024-11-20 16:48:04.038346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:19.242 [2024-11-20 16:48:04.038363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.038397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:19.242 [2024-11-20 16:48:04.041046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.041088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:19.242 [2024-11-20 16:48:04.041101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:21:19.242 [2024-11-20 16:48:04.041109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.042201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.042229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:19.242 [2024-11-20 16:48:04.042240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:21:19.242 [2024-11-20 16:48:04.042248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.046110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.046141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:19.242 [2024-11-20 16:48:04.046152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.845 ms 00:21:19.242 [2024-11-20 16:48:04.046161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.053210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.053252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:19.242 [2024-11-20 16:48:04.053263] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.027 ms 00:21:19.242 [2024-11-20 16:48:04.053272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.079620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.079822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:19.242 [2024-11-20 16:48:04.079840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.276 ms 00:21:19.242 [2024-11-20 16:48:04.079849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.093664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.093720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:19.242 [2024-11-20 16:48:04.093733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.774 ms 00:21:19.242 [2024-11-20 16:48:04.093745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.093902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.093912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:19.242 [2024-11-20 16:48:04.093921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:21:19.242 [2024-11-20 16:48:04.093929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.242 [2024-11-20 16:48:04.117828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.242 [2024-11-20 16:48:04.117883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:19.242 [2024-11-20 16:48:04.117898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.873 ms 00:21:19.242 [2024-11-20 16:48:04.117905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.501 [2024-11-20 16:48:04.141098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.501 [2024-11-20 16:48:04.141148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:19.501 [2024-11-20 16:48:04.141160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.158 ms 00:21:19.501 [2024-11-20 16:48:04.141168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.501 [2024-11-20 16:48:04.164048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.501 [2024-11-20 16:48:04.164110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:19.501 [2024-11-20 16:48:04.164123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.844 ms 00:21:19.501 [2024-11-20 16:48:04.164131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.501 [2024-11-20 16:48:04.187879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.501 [2024-11-20 16:48:04.187933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:19.501 [2024-11-20 16:48:04.187946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.679 ms 00:21:19.501 [2024-11-20 16:48:04.187954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.501 [2024-11-20 16:48:04.187986] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:19.501 [2024-11-20 16:48:04.188000] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:19.501 [2024-11-20 16:48:04.188151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188187] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 
16:48:04.188372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:21:19.502 [2024-11-20 16:48:04.188584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:21:19.502 [2024-11-20 16:48:04.188798] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:19.502 [2024-11-20 16:48:04.188806] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7e88b07-7c78-4b45-9425-1d3115e8b9f8 00:21:19.502 [2024-11-20 16:48:04.188813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:19.502 [2024-11-20 16:48:04.188820] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:19.502 [2024-11-20 16:48:04.188827] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:19.502 [2024-11-20 16:48:04.188835] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:19.502 [2024-11-20 16:48:04.188841] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:19.502 [2024-11-20 16:48:04.188848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:19.502 [2024-11-20 16:48:04.188855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:19.502 [2024-11-20 16:48:04.188861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:19.502 [2024-11-20 16:48:04.188867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:19.502 [2024-11-20 16:48:04.188874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.502 [2024-11-20 16:48:04.188885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:19.503 [2024-11-20 16:48:04.188892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:21:19.503 [2024-11-20 16:48:04.188899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.201254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.503 [2024-11-20 16:48:04.201295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:19.503 [2024-11-20 16:48:04.201305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.336 ms 00:21:19.503 [2024-11-20 16:48:04.201313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.201706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.503 [2024-11-20 16:48:04.201721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:19.503 [2024-11-20 16:48:04.201730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:21:19.503 [2024-11-20 16:48:04.201737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.236637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.236690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.503 [2024-11-20 16:48:04.236702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.236711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.236827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.236838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.503 [2024-11-20 16:48:04.236847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.236855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:19.503 [2024-11-20 16:48:04.236901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.236911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.503 [2024-11-20 16:48:04.236919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.236927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.236945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.236957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.503 [2024-11-20 16:48:04.236966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.236974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.313864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.314050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.503 [2024-11-20 16:48:04.314068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.314077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.377263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.377473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.503 [2024-11-20 16:48:04.377490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.377498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.377567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.377576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.503 [2024-11-20 16:48:04.377584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.377591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.377619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.377626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.503 [2024-11-20 16:48:04.377638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.377645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.377738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.377747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:19.503 [2024-11-20 16:48:04.377756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.377763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.377793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.377801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:19.503 [2024-11-20 16:48:04.377809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 
16:48:04.377818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.377855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.377863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.503 [2024-11-20 16:48:04.377871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.377878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.377920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.503 [2024-11-20 16:48:04.377930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.503 [2024-11-20 16:48:04.377941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.503 [2024-11-20 16:48:04.377948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.503 [2024-11-20 16:48:04.378073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.775 ms, result 0 00:21:20.437 00:21:20.437 00:21:20.437 16:48:05 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:21.003 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:21.003 16:48:05 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:21.003 16:48:05 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:21.003 16:48:05 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:21.003 16:48:05 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:21.003 16:48:05 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:21.003 16:48:05 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:21.003 16:48:05 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76690 00:21:21.003 16:48:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76690 ']' 00:21:21.003 16:48:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76690 00:21:21.003 Process with pid 76690 is not found 00:21:21.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76690) - No such process 00:21:21.003 16:48:05 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76690 is not found' 00:21:21.003 ************************************ 00:21:21.003 END TEST ftl_trim 00:21:21.003 ************************************ 00:21:21.003 00:21:21.003 real 0m51.201s 00:21:21.003 user 1m7.802s 00:21:21.003 sys 0m15.611s 00:21:21.003 16:48:05 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.003 16:48:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:21.003 16:48:05 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:21.003 16:48:05 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:21.003 16:48:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.003 16:48:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:21.003 ************************************ 00:21:21.003 START TEST ftl_restore 00:21:21.003 ************************************ 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:21:21.003 * Looking for test storage... 00:21:21.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.003 16:48:05 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:21.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.003 --rc genhtml_branch_coverage=1 00:21:21.003 --rc genhtml_function_coverage=1 00:21:21.003 --rc genhtml_legend=1 00:21:21.003 --rc geninfo_all_blocks=1 00:21:21.003 --rc geninfo_unexecuted_blocks=1 00:21:21.003 00:21:21.003 ' 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:21.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.003 --rc 
genhtml_branch_coverage=1 00:21:21.003 --rc genhtml_function_coverage=1 00:21:21.003 --rc genhtml_legend=1 00:21:21.003 --rc geninfo_all_blocks=1 00:21:21.003 --rc geninfo_unexecuted_blocks=1 00:21:21.003 00:21:21.003 ' 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:21.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.003 --rc genhtml_branch_coverage=1 00:21:21.003 --rc genhtml_function_coverage=1 00:21:21.003 --rc genhtml_legend=1 00:21:21.003 --rc geninfo_all_blocks=1 00:21:21.003 --rc geninfo_unexecuted_blocks=1 00:21:21.003 00:21:21.003 ' 00:21:21.003 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:21.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.003 --rc genhtml_branch_coverage=1 00:21:21.003 --rc genhtml_function_coverage=1 00:21:21.003 --rc genhtml_legend=1 00:21:21.003 --rc geninfo_all_blocks=1 00:21:21.003 --rc geninfo_unexecuted_blocks=1 00:21:21.003 00:21:21.003 ' 00:21:21.003 16:48:05 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:21.003 16:48:05 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:21.004 16:48:05 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:21.004 16:48:05 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:21.004 16:48:05 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:21.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.9oqmXDh44V 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76912 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76912 00:21:21.261 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 76912 ']' 00:21:21.261 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.261 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.261 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.261 16:48:05 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.261 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.261 16:48:05 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:21.261 [2024-11-20 16:48:05.972438] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
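restore.sh drives everything through a freshly started spdk_tgt: the target binary is launched in the background, its PID (76912 in this run) is kept for the cleanup trap, and waitforlisten blocks until the default RPC socket /var/tmp/spdk.sock answers before any bdev RPCs are sent. A minimal sketch of that launch-and-poll pattern, using the binary and rpc.py paths from this log (an illustration only, not the actual waitforlisten helper from autotest_common.sh):

# start the target in the background and remember its PID for the cleanup trap
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
svcpid=$!
# poll a cheap RPC until the UNIX socket /var/tmp/spdk.sock accepts requests
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done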
00:21:21.261 [2024-11-20 16:48:05.972729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76912 ] 00:21:21.261 [2024-11-20 16:48:06.128719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.519 [2024-11-20 16:48:06.237538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.084 16:48:06 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.084 16:48:06 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:22.084 16:48:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:22.084 16:48:06 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:22.084 16:48:06 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:22.084 16:48:06 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:22.084 16:48:06 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:22.084 16:48:06 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:22.342 16:48:07 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:22.342 16:48:07 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:22.342 16:48:07 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:22.342 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:22.342 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:22.342 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:22.342 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:22.342 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:22.601 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:22.601 { 00:21:22.601 "name": "nvme0n1", 00:21:22.601 "aliases": [ 00:21:22.601 "aa900d90-b6d7-4e77-95ec-eb8f03e1d924" 00:21:22.601 ], 00:21:22.601 "product_name": "NVMe disk", 00:21:22.601 "block_size": 4096, 00:21:22.601 "num_blocks": 1310720, 00:21:22.601 "uuid": "aa900d90-b6d7-4e77-95ec-eb8f03e1d924", 00:21:22.601 "numa_id": -1, 00:21:22.601 "assigned_rate_limits": { 00:21:22.601 "rw_ios_per_sec": 0, 00:21:22.601 "rw_mbytes_per_sec": 0, 00:21:22.601 "r_mbytes_per_sec": 0, 00:21:22.601 "w_mbytes_per_sec": 0 00:21:22.601 }, 00:21:22.601 "claimed": true, 00:21:22.601 "claim_type": "read_many_write_one", 00:21:22.601 "zoned": false, 00:21:22.601 "supported_io_types": { 00:21:22.601 "read": true, 00:21:22.601 "write": true, 00:21:22.601 "unmap": true, 00:21:22.601 "flush": true, 00:21:22.601 "reset": true, 00:21:22.601 "nvme_admin": true, 00:21:22.601 "nvme_io": true, 00:21:22.601 "nvme_io_md": false, 00:21:22.601 "write_zeroes": true, 00:21:22.601 "zcopy": false, 00:21:22.601 "get_zone_info": false, 00:21:22.601 "zone_management": false, 00:21:22.601 "zone_append": false, 00:21:22.601 "compare": true, 00:21:22.601 "compare_and_write": false, 00:21:22.601 "abort": true, 00:21:22.601 "seek_hole": false, 00:21:22.601 "seek_data": false, 00:21:22.601 "copy": true, 00:21:22.601 "nvme_iov_md": false 00:21:22.601 }, 00:21:22.601 "driver_specific": { 00:21:22.601 "nvme": [ 
00:21:22.601 { 00:21:22.601 "pci_address": "0000:00:11.0", 00:21:22.601 "trid": { 00:21:22.601 "trtype": "PCIe", 00:21:22.601 "traddr": "0000:00:11.0" 00:21:22.601 }, 00:21:22.601 "ctrlr_data": { 00:21:22.601 "cntlid": 0, 00:21:22.601 "vendor_id": "0x1b36", 00:21:22.601 "model_number": "QEMU NVMe Ctrl", 00:21:22.601 "serial_number": "12341", 00:21:22.601 "firmware_revision": "8.0.0", 00:21:22.601 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:22.601 "oacs": { 00:21:22.601 "security": 0, 00:21:22.601 "format": 1, 00:21:22.601 "firmware": 0, 00:21:22.601 "ns_manage": 1 00:21:22.601 }, 00:21:22.601 "multi_ctrlr": false, 00:21:22.601 "ana_reporting": false 00:21:22.601 }, 00:21:22.601 "vs": { 00:21:22.601 "nvme_version": "1.4" 00:21:22.601 }, 00:21:22.601 "ns_data": { 00:21:22.601 "id": 1, 00:21:22.601 "can_share": false 00:21:22.601 } 00:21:22.601 } 00:21:22.601 ], 00:21:22.601 "mp_policy": "active_passive" 00:21:22.601 } 00:21:22.601 } 00:21:22.601 ]' 00:21:22.601 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:22.601 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:22.601 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:22.601 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:22.601 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:22.601 16:48:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:22.601 16:48:07 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:22.601 16:48:07 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:22.601 16:48:07 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:22.601 16:48:07 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:22.601 16:48:07 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:22.860 16:48:07 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=96a65a34-ad3f-422a-8e34-241542fe539d 00:21:22.860 16:48:07 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:22.860 16:48:07 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96a65a34-ad3f-422a-8e34-241542fe539d 00:21:23.120 16:48:07 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:23.390 16:48:08 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d 00:21:23.390 16:48:08 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:23.648 16:48:08 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:23.648 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:23.648 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:23.648 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:23.648 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:23.648 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:23.907 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:23.907 { 00:21:23.907 "name": "23d8c53a-ebb3-4bc5-961e-406a325001c9", 00:21:23.907 "aliases": [ 00:21:23.907 "lvs/nvme0n1p0" 00:21:23.907 ], 00:21:23.907 "product_name": "Logical Volume", 00:21:23.907 "block_size": 4096, 00:21:23.907 "num_blocks": 26476544, 00:21:23.907 "uuid": "23d8c53a-ebb3-4bc5-961e-406a325001c9", 00:21:23.907 "assigned_rate_limits": { 00:21:23.907 "rw_ios_per_sec": 0, 00:21:23.907 "rw_mbytes_per_sec": 0, 00:21:23.907 "r_mbytes_per_sec": 0, 00:21:23.907 "w_mbytes_per_sec": 0 00:21:23.907 }, 00:21:23.907 "claimed": false, 00:21:23.907 "zoned": false, 00:21:23.907 "supported_io_types": { 00:21:23.907 "read": true, 00:21:23.907 "write": true, 00:21:23.907 "unmap": true, 00:21:23.907 "flush": false, 00:21:23.907 "reset": true, 00:21:23.907 "nvme_admin": false, 00:21:23.907 "nvme_io": false, 00:21:23.907 "nvme_io_md": false, 00:21:23.907 "write_zeroes": true, 00:21:23.907 "zcopy": false, 00:21:23.907 "get_zone_info": false, 00:21:23.907 "zone_management": false, 00:21:23.907 "zone_append": false, 00:21:23.907 "compare": false, 00:21:23.907 "compare_and_write": false, 00:21:23.907 "abort": false, 00:21:23.907 "seek_hole": true, 00:21:23.907 "seek_data": true, 00:21:23.907 "copy": false, 00:21:23.907 "nvme_iov_md": false 00:21:23.907 }, 00:21:23.907 "driver_specific": { 00:21:23.907 "lvol": { 00:21:23.908 "lvol_store_uuid": "b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d", 00:21:23.908 "base_bdev": "nvme0n1", 00:21:23.908 "thin_provision": true, 00:21:23.908 "num_allocated_clusters": 0, 00:21:23.908 "snapshot": false, 00:21:23.908 "clone": false, 00:21:23.908 "esnap_clone": false 00:21:23.908 } 00:21:23.908 } 00:21:23.908 } 00:21:23.908 ]' 00:21:23.908 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:23.908 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:23.908 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:23.908 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:23.908 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:23.908 16:48:08 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:23.908 16:48:08 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:23.908 16:48:08 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:23.908 16:48:08 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:24.165 16:48:08 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:24.165 16:48:09 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:24.165 16:48:09 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:24.165 16:48:09 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:24.165 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:24.165 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:24.165 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:24.165 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:24.423 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:24.423 { 00:21:24.423 "name": "23d8c53a-ebb3-4bc5-961e-406a325001c9", 00:21:24.423 "aliases": [ 00:21:24.423 "lvs/nvme0n1p0" 00:21:24.423 ], 00:21:24.423 "product_name": "Logical Volume", 00:21:24.423 "block_size": 4096, 00:21:24.423 "num_blocks": 26476544, 00:21:24.423 "uuid": "23d8c53a-ebb3-4bc5-961e-406a325001c9", 00:21:24.423 "assigned_rate_limits": { 00:21:24.423 "rw_ios_per_sec": 0, 00:21:24.423 "rw_mbytes_per_sec": 0, 00:21:24.423 "r_mbytes_per_sec": 0, 00:21:24.423 "w_mbytes_per_sec": 0 00:21:24.423 }, 00:21:24.423 "claimed": false, 00:21:24.423 "zoned": false, 00:21:24.423 "supported_io_types": { 00:21:24.423 "read": true, 00:21:24.423 "write": true, 00:21:24.423 "unmap": true, 00:21:24.423 "flush": false, 00:21:24.423 "reset": true, 00:21:24.423 "nvme_admin": false, 00:21:24.423 "nvme_io": false, 00:21:24.423 "nvme_io_md": false, 00:21:24.423 "write_zeroes": true, 00:21:24.423 "zcopy": false, 00:21:24.423 "get_zone_info": false, 00:21:24.423 "zone_management": false, 00:21:24.423 "zone_append": false, 00:21:24.423 "compare": false, 00:21:24.423 "compare_and_write": false, 00:21:24.423 "abort": false, 00:21:24.423 "seek_hole": true, 00:21:24.423 "seek_data": true, 00:21:24.423 "copy": false, 00:21:24.423 "nvme_iov_md": false 00:21:24.423 }, 00:21:24.423 "driver_specific": { 00:21:24.423 "lvol": { 00:21:24.423 "lvol_store_uuid": "b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d", 00:21:24.423 "base_bdev": "nvme0n1", 00:21:24.423 "thin_provision": true, 00:21:24.423 "num_allocated_clusters": 0, 00:21:24.423 "snapshot": false, 00:21:24.423 "clone": false, 00:21:24.423 "esnap_clone": false 00:21:24.423 } 00:21:24.423 } 00:21:24.423 } 00:21:24.423 ]' 00:21:24.423 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:24.423 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:24.423 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:24.423 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:24.423 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:24.423 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:24.423 16:48:09 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:24.423 16:48:09 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:24.681 16:48:09 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:24.681 16:48:09 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:24.681 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:24.681 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:24.681 16:48:09 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:21:24.681 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:24.681 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23d8c53a-ebb3-4bc5-961e-406a325001c9 00:21:24.938 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:24.938 { 00:21:24.938 "name": "23d8c53a-ebb3-4bc5-961e-406a325001c9", 00:21:24.938 "aliases": [ 00:21:24.938 "lvs/nvme0n1p0" 00:21:24.938 ], 00:21:24.938 "product_name": "Logical Volume", 00:21:24.938 "block_size": 4096, 00:21:24.938 "num_blocks": 26476544, 00:21:24.938 "uuid": "23d8c53a-ebb3-4bc5-961e-406a325001c9", 00:21:24.938 "assigned_rate_limits": { 00:21:24.938 "rw_ios_per_sec": 0, 00:21:24.938 "rw_mbytes_per_sec": 0, 00:21:24.938 "r_mbytes_per_sec": 0, 00:21:24.938 "w_mbytes_per_sec": 0 00:21:24.938 }, 00:21:24.938 "claimed": false, 00:21:24.938 "zoned": false, 00:21:24.938 "supported_io_types": { 00:21:24.938 "read": true, 00:21:24.938 "write": true, 00:21:24.938 "unmap": true, 00:21:24.938 "flush": false, 00:21:24.938 "reset": true, 00:21:24.938 "nvme_admin": false, 00:21:24.938 "nvme_io": false, 00:21:24.938 "nvme_io_md": false, 00:21:24.938 "write_zeroes": true, 00:21:24.938 "zcopy": false, 00:21:24.938 "get_zone_info": false, 00:21:24.938 "zone_management": false, 00:21:24.938 "zone_append": false, 00:21:24.938 "compare": false, 00:21:24.938 "compare_and_write": false, 00:21:24.938 "abort": false, 00:21:24.938 "seek_hole": true, 00:21:24.938 "seek_data": true, 00:21:24.938 "copy": false, 00:21:24.939 "nvme_iov_md": false 00:21:24.939 }, 00:21:24.939 "driver_specific": { 00:21:24.939 "lvol": { 00:21:24.939 "lvol_store_uuid": "b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d", 00:21:24.939 "base_bdev": "nvme0n1", 00:21:24.939 "thin_provision": true, 00:21:24.939 "num_allocated_clusters": 0, 00:21:24.939 "snapshot": false, 00:21:24.939 "clone": false, 00:21:24.939 "esnap_clone": false 00:21:24.939 } 00:21:24.939 } 00:21:24.939 } 00:21:24.939 ]' 00:21:24.939 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:24.939 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:24.939 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:24.939 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:24.939 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:24.939 16:48:09 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:24.939 16:48:09 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:24.939 16:48:09 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 23d8c53a-ebb3-4bc5-961e-406a325001c9 --l2p_dram_limit 10' 00:21:24.939 16:48:09 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:24.939 16:48:09 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:24.939 16:48:09 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:24.939 16:48:09 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:24.939 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:24.939 16:48:09 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 23d8c53a-ebb3-4bc5-961e-406a325001c9 --l2p_dram_limit 10 -c nvc0n1p0 00:21:25.197 
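The xtrace above is the whole device assembly for the restore test: the 0000:00:11.0 controller provides the base volume (a 103424 MiB thin lvol on lvstore "lvs"), the 0000:00:10.0 controller provides the write-buffer cache (a 5171 MiB split of nvc0n1), and bdev_ftl_create ties them together with a 10 MiB L2P DRAM limit. The "[: : integer expression expected" message comes from restore.sh line 54 testing an empty option value with -eq; it is harmless noise here, not an RPC failure. Condensed into the underlying rpc.py calls, with the UUIDs and sizes from this run:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# base device: attach 0000:00:11.0 and carve a 103424 MiB thin-provisioned lvol
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
$RPC bdev_lvol_create_lvstore nvme0n1 lvs
$RPC bdev_lvol_create nvme0n1p0 103424 -t -u b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d   # -> 23d8c53a-...
# cache device: attach 0000:00:10.0 and split off a 5171 MiB write-buffer partition
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$RPC bdev_split_create nvc0n1 -s 5171 1                                             # -> nvc0n1p0
# assemble the FTL bdev; startup (traced below) takes a few seconds due to the NV cache scrub
$RPC -t 240 bdev_ftl_create -b ftl0 -d 23d8c53a-ebb3-4bc5-961e-406a325001c9 --l2p_dram_limit 10 -c nvc0n1p0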
[2024-11-20 16:48:09.942093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.942156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:25.197 [2024-11-20 16:48:09.942173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:25.197 [2024-11-20 16:48:09.942182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.942243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.942255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.197 [2024-11-20 16:48:09.942266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:25.197 [2024-11-20 16:48:09.942273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.942298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:25.197 [2024-11-20 16:48:09.943015] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:25.197 [2024-11-20 16:48:09.943150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.943176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:25.197 [2024-11-20 16:48:09.943187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:21:25.197 [2024-11-20 16:48:09.943194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.943263] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4 00:21:25.197 [2024-11-20 16:48:09.944343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.944371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:25.197 [2024-11-20 16:48:09.944392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:25.197 [2024-11-20 16:48:09.944401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.949813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.949944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.197 [2024-11-20 16:48:09.949961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.369 ms 00:21:25.197 [2024-11-20 16:48:09.949971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.950058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.950069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.197 [2024-11-20 16:48:09.950077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:25.197 [2024-11-20 16:48:09.950088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.950126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.950136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:25.197 [2024-11-20 16:48:09.950145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:25.197 [2024-11-20 16:48:09.950155] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.950176] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:25.197 [2024-11-20 16:48:09.953758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.953791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.197 [2024-11-20 16:48:09.953804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:21:25.197 [2024-11-20 16:48:09.953811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.953843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.953851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:25.197 [2024-11-20 16:48:09.953861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:25.197 [2024-11-20 16:48:09.953868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.953900] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:25.197 [2024-11-20 16:48:09.954034] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:25.197 [2024-11-20 16:48:09.954049] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:25.197 [2024-11-20 16:48:09.954059] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:25.197 [2024-11-20 16:48:09.954071] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954080] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954088] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:25.197 [2024-11-20 16:48:09.954096] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:25.197 [2024-11-20 16:48:09.954106] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:25.197 [2024-11-20 16:48:09.954113] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:25.197 [2024-11-20 16:48:09.954122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.954129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:25.197 [2024-11-20 16:48:09.954138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:21:25.197 [2024-11-20 16:48:09.954151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.954237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.197 [2024-11-20 16:48:09.954245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:25.197 [2024-11-20 16:48:09.954253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:25.197 [2024-11-20 16:48:09.954260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.197 [2024-11-20 16:48:09.954374] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:25.197 [2024-11-20 16:48:09.954403] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:21:25.197 [2024-11-20 16:48:09.954413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:25.197 [2024-11-20 16:48:09.954436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:25.197 [2024-11-20 16:48:09.954459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.197 [2024-11-20 16:48:09.954474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:25.197 [2024-11-20 16:48:09.954481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:25.197 [2024-11-20 16:48:09.954490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.197 [2024-11-20 16:48:09.954499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:25.197 [2024-11-20 16:48:09.954515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:25.197 [2024-11-20 16:48:09.954522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:25.197 [2024-11-20 16:48:09.954539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:25.197 [2024-11-20 16:48:09.954563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:25.197 [2024-11-20 16:48:09.954584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:25.197 [2024-11-20 16:48:09.954606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:25.197 [2024-11-20 16:48:09.954627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.197 [2024-11-20 16:48:09.954641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:25.197 [2024-11-20 16:48:09.954651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954657] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.197 [2024-11-20 16:48:09.954665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:25.197 [2024-11-20 16:48:09.954671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:25.197 [2024-11-20 16:48:09.954679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.197 [2024-11-20 16:48:09.954685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:25.197 [2024-11-20 16:48:09.954693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:25.197 [2024-11-20 16:48:09.954699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.197 [2024-11-20 16:48:09.954707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:25.197 [2024-11-20 16:48:09.954713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:25.197 [2024-11-20 16:48:09.954721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.198 [2024-11-20 16:48:09.954727] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:25.198 [2024-11-20 16:48:09.954736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:25.198 [2024-11-20 16:48:09.954744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:25.198 [2024-11-20 16:48:09.954753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.198 [2024-11-20 16:48:09.954760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:25.198 [2024-11-20 16:48:09.954770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:25.198 [2024-11-20 16:48:09.954777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:25.198 [2024-11-20 16:48:09.954785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:25.198 [2024-11-20 16:48:09.954791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:25.198 [2024-11-20 16:48:09.954799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:25.198 [2024-11-20 16:48:09.954810] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:25.198 [2024-11-20 16:48:09.954820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.198 [2024-11-20 16:48:09.954829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:25.198 [2024-11-20 16:48:09.954838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:25.198 [2024-11-20 16:48:09.954845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:25.198 [2024-11-20 16:48:09.954854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:25.198 [2024-11-20 16:48:09.954861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:25.198 [2024-11-20 16:48:09.954873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:21:25.198 [2024-11-20 16:48:09.954880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:25.198 [2024-11-20 16:48:09.954888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:25.198 [2024-11-20 16:48:09.954895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:25.198 [2024-11-20 16:48:09.954905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:25.198 [2024-11-20 16:48:09.954911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:25.198 [2024-11-20 16:48:09.954920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:25.198 [2024-11-20 16:48:09.954926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:25.198 [2024-11-20 16:48:09.954936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:25.198 [2024-11-20 16:48:09.954943] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:25.198 [2024-11-20 16:48:09.954952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.198 [2024-11-20 16:48:09.954960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:25.198 [2024-11-20 16:48:09.954968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:25.198 [2024-11-20 16:48:09.954975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:25.198 [2024-11-20 16:48:09.954983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:25.198 [2024-11-20 16:48:09.954990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.198 [2024-11-20 16:48:09.954999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:25.198 [2024-11-20 16:48:09.955007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:21:25.198 [2024-11-20 16:48:09.955015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.198 [2024-11-20 16:48:09.955053] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
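The layout dump above reports each metadata region twice: once in MiB (the dump_region lines) and once as raw hex block offsets and sizes (the SB metadata layout lines). The two views agree once block counts are multiplied by the 4096-byte block size; for example the L2P region (type 0x2, blk_sz 0x5000) is exactly the 80.00 MiB reported earlier, which also matches 20971520 L2P entries of 4 bytes each. A quick shell check of that arithmetic:

# L2P region, type 0x2: 0x5000 blocks * 4096 bytes per block, expressed in MiB
echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # prints 80
# cross-check against the reported L2P table: 20971520 entries * 4 bytes per entry
echo $(( 20971520 * 4 / 1024 / 1024 ))    # prints 80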
00:21:25.198 [2024-11-20 16:48:09.955065] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:27.748 [2024-11-20 16:48:12.444297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.748 [2024-11-20 16:48:12.444360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:27.748 [2024-11-20 16:48:12.444375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2489.234 ms 00:21:27.748 [2024-11-20 16:48:12.444397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.748 [2024-11-20 16:48:12.470449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.748 [2024-11-20 16:48:12.470498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:27.748 [2024-11-20 16:48:12.470511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.754 ms 00:21:27.748 [2024-11-20 16:48:12.470521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.470676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.470688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:27.749 [2024-11-20 16:48:12.470697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:27.749 [2024-11-20 16:48:12.470708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.501471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.501517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.749 [2024-11-20 16:48:12.501529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.723 ms 00:21:27.749 [2024-11-20 16:48:12.501539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.501575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.501590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.749 [2024-11-20 16:48:12.501603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:27.749 [2024-11-20 16:48:12.501616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.501990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.502014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.749 [2024-11-20 16:48:12.502023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:27.749 [2024-11-20 16:48:12.502033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.502150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.502164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.749 [2024-11-20 16:48:12.502174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:27.749 [2024-11-20 16:48:12.502185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.516691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.516734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.749 [2024-11-20 
16:48:12.516746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.486 ms 00:21:27.749 [2024-11-20 16:48:12.516756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.528092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:27.749 [2024-11-20 16:48:12.530981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.531013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:27.749 [2024-11-20 16:48:12.531026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.133 ms 00:21:27.749 [2024-11-20 16:48:12.531036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.601574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.601776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:27.749 [2024-11-20 16:48:12.601853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.495 ms 00:21:27.749 [2024-11-20 16:48:12.601866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.602049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.602063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:27.749 [2024-11-20 16:48:12.602076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:27.749 [2024-11-20 16:48:12.602084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.749 [2024-11-20 16:48:12.625283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.749 [2024-11-20 16:48:12.625472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:27.749 [2024-11-20 16:48:12.625495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.142 ms 00:21:27.749 [2024-11-20 16:48:12.625503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.647812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.647854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:28.007 [2024-11-20 16:48:12.647869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.264 ms 00:21:28.007 [2024-11-20 16:48:12.647878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.648458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.648474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:28.007 [2024-11-20 16:48:12.648484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:21:28.007 [2024-11-20 16:48:12.648492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.716118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.716173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:28.007 [2024-11-20 16:48:12.716190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.560 ms 00:21:28.007 [2024-11-20 16:48:12.716199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 
16:48:12.740442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.740496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:28.007 [2024-11-20 16:48:12.740510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.158 ms 00:21:28.007 [2024-11-20 16:48:12.740518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.764042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.764234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:28.007 [2024-11-20 16:48:12.764255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.479 ms 00:21:28.007 [2024-11-20 16:48:12.764262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.787491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.787535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:28.007 [2024-11-20 16:48:12.787548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.185 ms 00:21:28.007 [2024-11-20 16:48:12.787556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.787602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.787611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:28.007 [2024-11-20 16:48:12.787624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:28.007 [2024-11-20 16:48:12.787632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.787714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.007 [2024-11-20 16:48:12.787723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:28.007 [2024-11-20 16:48:12.787736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:28.007 [2024-11-20 16:48:12.787743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.007 [2024-11-20 16:48:12.788634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2846.099 ms, result 0 00:21:28.007 { 00:21:28.007 "name": "ftl0", 00:21:28.007 "uuid": "ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4" 00:21:28.007 } 00:21:28.007 16:48:12 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:28.007 16:48:12 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:28.266 16:48:13 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:28.266 16:48:13 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:28.524 [2024-11-20 16:48:13.260326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.260554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:28.524 [2024-11-20 16:48:13.260627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:28.524 [2024-11-20 16:48:13.260660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.260702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
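By this point the FTL device is up (ftl0, UUID ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4), the test has snapshotted the bdev subsystem configuration, and bdev_ftl_unload has been issued; its shutdown trace continues below. The snapshot is the three-step echo / save_subsystem_config / echo wrapper seen above, which turns the RPC output into a standalone JSON config. Roughly (the redirect target is not visible in this excerpt; config/ftl.json is assumed, matching the cleanup paths used by the trim test earlier):

# wrap the bdev subsystem dump in a {"subsystems": [...]} envelope for later reload
{
  echo '{"subsystems": ['
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

A file in this shape can later be replayed against a fresh target (for instance with rpc.py load_config) to recreate the same ftl0 bdev.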
00:21:28.524 [2024-11-20 16:48:13.263375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.263499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:28.524 [2024-11-20 16:48:13.263559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:21:28.524 [2024-11-20 16:48:13.263583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.263891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.263965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:28.524 [2024-11-20 16:48:13.264023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:21:28.524 [2024-11-20 16:48:13.264046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.267625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.267705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:28.524 [2024-11-20 16:48:13.267762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms 00:21:28.524 [2024-11-20 16:48:13.267785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.273941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.274050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:28.524 [2024-11-20 16:48:13.274112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.075 ms 00:21:28.524 [2024-11-20 16:48:13.274135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.298179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.298321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:28.524 [2024-11-20 16:48:13.298426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.967 ms 00:21:28.524 [2024-11-20 16:48:13.298451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.312764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.312901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:28.524 [2024-11-20 16:48:13.312965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.257 ms 00:21:28.524 [2024-11-20 16:48:13.312988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.313148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.313183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:28.524 [2024-11-20 16:48:13.313207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:21:28.524 [2024-11-20 16:48:13.313266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.336627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.336663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:28.524 [2024-11-20 16:48:13.336676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.336 ms 00:21:28.524 [2024-11-20 16:48:13.336684] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.368537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.368572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:28.524 [2024-11-20 16:48:13.368584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.815 ms 00:21:28.524 [2024-11-20 16:48:13.368592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.524 [2024-11-20 16:48:13.390340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.524 [2024-11-20 16:48:13.390397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:28.524 [2024-11-20 16:48:13.390411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.708 ms 00:21:28.524 [2024-11-20 16:48:13.390419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.783 [2024-11-20 16:48:13.412325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.783 [2024-11-20 16:48:13.412362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:28.783 [2024-11-20 16:48:13.412375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.828 ms 00:21:28.783 [2024-11-20 16:48:13.412396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.783 [2024-11-20 16:48:13.412435] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:28.783 [2024-11-20 16:48:13.412450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:28.783 [2024-11-20 16:48:13.412555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 
16:48:13.412572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:21:28.784 [2024-11-20 16:48:13.412781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.412991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:28.784 [2024-11-20 16:48:13.413311] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:28.784 [2024-11-20 16:48:13.413323] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4 00:21:28.784 [2024-11-20 16:48:13.413330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:28.785 [2024-11-20 16:48:13.413341] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:28.785 [2024-11-20 16:48:13.413348] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:28.785 [2024-11-20 16:48:13.413360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:28.785 [2024-11-20 16:48:13.413366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:28.785 [2024-11-20 16:48:13.413395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:28.785 [2024-11-20 16:48:13.413403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:28.785 [2024-11-20 16:48:13.413412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:28.785 [2024-11-20 16:48:13.413418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:28.785 [2024-11-20 16:48:13.413426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.785 [2024-11-20 16:48:13.413434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:28.785 [2024-11-20 16:48:13.413444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:21:28.785 [2024-11-20 16:48:13.413451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.426192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.785 [2024-11-20 16:48:13.426309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:21:28.785 [2024-11-20 16:48:13.426361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.703 ms 00:21:28.785 [2024-11-20 16:48:13.426452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.426819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.785 [2024-11-20 16:48:13.426895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:28.785 [2024-11-20 16:48:13.426949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:21:28.785 [2024-11-20 16:48:13.426973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.468461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.468593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:28.785 [2024-11-20 16:48:13.468649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.468672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.468776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.468802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:28.785 [2024-11-20 16:48:13.468843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.468868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.468976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.469044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:28.785 [2024-11-20 16:48:13.469070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.469089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.469147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.469171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:28.785 [2024-11-20 16:48:13.469192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.469242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.545194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.545328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:28.785 [2024-11-20 16:48:13.545391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.545415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.607803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.607931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:28.785 [2024-11-20 16:48:13.607988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.608014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.608104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.608147] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:28.785 [2024-11-20 16:48:13.608197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.608220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.608300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.608416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:28.785 [2024-11-20 16:48:13.608444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.608464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.608579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.608610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:28.785 [2024-11-20 16:48:13.608644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.608663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.608741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.608768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:28.785 [2024-11-20 16:48:13.608789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.608850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.608904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.608932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:28.785 [2024-11-20 16:48:13.609006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.609029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.609087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.785 [2024-11-20 16:48:13.609159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:28.785 [2024-11-20 16:48:13.609208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.785 [2024-11-20 16:48:13.609231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.785 [2024-11-20 16:48:13.609375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.016 ms, result 0 00:21:28.785 true 00:21:28.785 16:48:13 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76912 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76912 ']' 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76912 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76912 00:21:28.785 killing process with pid 76912 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76912' 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 76912 00:21:28.785 16:48:13 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 76912 00:21:38.750 16:48:23 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:44.014 262144+0 records in 00:21:44.014 262144+0 records out 00:21:44.014 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.37606 s, 245 MB/s 00:21:44.014 16:48:27 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:44.945 16:48:29 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:44.945 [2024-11-20 16:48:29.645661] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:21:44.945 [2024-11-20 16:48:29.645962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77133 ] 00:21:44.945 [2024-11-20 16:48:29.804740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.203 [2024-11-20 16:48:29.908056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.461 [2024-11-20 16:48:30.167496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.461 [2024-11-20 16:48:30.167562] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.461 [2024-11-20 16:48:30.321217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.461 [2024-11-20 16:48:30.321449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:45.461 [2024-11-20 16:48:30.321477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:45.461 [2024-11-20 16:48:30.321487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.461 [2024-11-20 16:48:30.321541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.461 [2024-11-20 16:48:30.321557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.461 [2024-11-20 16:48:30.321574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:45.461 [2024-11-20 16:48:30.321586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.461 [2024-11-20 16:48:30.321608] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:45.461 [2024-11-20 16:48:30.322416] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:45.461 [2024-11-20 16:48:30.322449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.461 [2024-11-20 16:48:30.322458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.461 [2024-11-20 16:48:30.322466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:21:45.462 [2024-11-20 16:48:30.322474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.462 [2024-11-20 16:48:30.323608] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:45.462 [2024-11-20 16:48:30.336279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.462 [2024-11-20 16:48:30.336316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:45.462 [2024-11-20 16:48:30.336334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.672 ms 00:21:45.462 [2024-11-20 16:48:30.336343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.462 [2024-11-20 16:48:30.336422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.462 [2024-11-20 16:48:30.336432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:45.462 [2024-11-20 16:48:30.336441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:45.462 [2024-11-20 16:48:30.336448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.462 [2024-11-20 16:48:30.341868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.462 [2024-11-20 16:48:30.341911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.462 [2024-11-20 16:48:30.341921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.354 ms 00:21:45.462 [2024-11-20 16:48:30.341932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.462 [2024-11-20 16:48:30.342025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.462 [2024-11-20 16:48:30.342034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.462 [2024-11-20 16:48:30.342046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:45.462 [2024-11-20 16:48:30.342053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.462 [2024-11-20 16:48:30.342097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.462 [2024-11-20 16:48:30.342107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:45.462 [2024-11-20 16:48:30.342114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:45.462 [2024-11-20 16:48:30.342122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.462 [2024-11-20 16:48:30.342149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.721 [2024-11-20 16:48:30.345693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.721 [2024-11-20 16:48:30.345721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.721 [2024-11-20 16:48:30.345730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.550 ms 00:21:45.721 [2024-11-20 16:48:30.345739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.721 [2024-11-20 16:48:30.345773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.721 [2024-11-20 16:48:30.345782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:45.721 [2024-11-20 16:48:30.345790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:45.721 [2024-11-20 16:48:30.345797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.721 [2024-11-20 16:48:30.345815] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:45.721 [2024-11-20 16:48:30.345831] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:45.721 [2024-11-20 16:48:30.345866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:45.721 [2024-11-20 16:48:30.345884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:45.721 [2024-11-20 16:48:30.345995] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:45.721 [2024-11-20 16:48:30.346009] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:45.721 [2024-11-20 16:48:30.346019] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:45.721 [2024-11-20 16:48:30.346029] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:45.721 [2024-11-20 16:48:30.346038] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:45.721 [2024-11-20 16:48:30.346047] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:45.721 [2024-11-20 16:48:30.346054] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:45.721 [2024-11-20 16:48:30.346061] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:45.722 [2024-11-20 16:48:30.346068] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:45.722 [2024-11-20 16:48:30.346078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.722 [2024-11-20 16:48:30.346085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:45.722 [2024-11-20 16:48:30.346092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:21:45.722 [2024-11-20 16:48:30.346098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.722 [2024-11-20 16:48:30.346180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.722 [2024-11-20 16:48:30.346187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:45.722 [2024-11-20 16:48:30.346195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:45.722 [2024-11-20 16:48:30.346202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.722 [2024-11-20 16:48:30.346320] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:45.722 [2024-11-20 16:48:30.346333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:45.722 [2024-11-20 16:48:30.346341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:45.722 [2024-11-20 16:48:30.346363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:45.722 [2024-11-20 16:48:30.346403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:45.722 [2024-11-20 
16:48:30.346409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.722 [2024-11-20 16:48:30.346416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:45.722 [2024-11-20 16:48:30.346423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:45.722 [2024-11-20 16:48:30.346429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.722 [2024-11-20 16:48:30.346436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:45.722 [2024-11-20 16:48:30.346443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:45.722 [2024-11-20 16:48:30.346455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:45.722 [2024-11-20 16:48:30.346469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:45.722 [2024-11-20 16:48:30.346490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:45.722 [2024-11-20 16:48:30.346510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:45.722 [2024-11-20 16:48:30.346530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:45.722 [2024-11-20 16:48:30.346558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:45.722 [2024-11-20 16:48:30.346577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.722 [2024-11-20 16:48:30.346590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:45.722 [2024-11-20 16:48:30.346596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:45.722 [2024-11-20 16:48:30.346602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.722 [2024-11-20 16:48:30.346608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:45.722 [2024-11-20 16:48:30.346615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:45.722 [2024-11-20 16:48:30.346621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:21:45.722 [2024-11-20 16:48:30.346634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:45.722 [2024-11-20 16:48:30.346641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346647] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:45.722 [2024-11-20 16:48:30.346655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:45.722 [2024-11-20 16:48:30.346661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.722 [2024-11-20 16:48:30.346677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:45.722 [2024-11-20 16:48:30.346685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:45.722 [2024-11-20 16:48:30.346691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:45.722 [2024-11-20 16:48:30.346699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:45.722 [2024-11-20 16:48:30.346705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:45.722 [2024-11-20 16:48:30.346712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:45.722 [2024-11-20 16:48:30.346720] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:45.722 [2024-11-20 16:48:30.346728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.722 [2024-11-20 16:48:30.346736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:45.722 [2024-11-20 16:48:30.346743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:45.722 [2024-11-20 16:48:30.346750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:45.722 [2024-11-20 16:48:30.346757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:45.722 [2024-11-20 16:48:30.346763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:45.722 [2024-11-20 16:48:30.346770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:45.722 [2024-11-20 16:48:30.346777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:45.722 [2024-11-20 16:48:30.346784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:45.722 [2024-11-20 16:48:30.346791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:45.722 [2024-11-20 16:48:30.346798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:45.722 [2024-11-20 16:48:30.346805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:45.722 [2024-11-20 16:48:30.346812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:45.722 [2024-11-20 16:48:30.346824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:45.722 [2024-11-20 16:48:30.346834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:45.722 [2024-11-20 16:48:30.346841] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:45.722 [2024-11-20 16:48:30.346851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.722 [2024-11-20 16:48:30.346859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:45.722 [2024-11-20 16:48:30.346866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:45.722 [2024-11-20 16:48:30.346873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:45.722 [2024-11-20 16:48:30.346880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:45.722 [2024-11-20 16:48:30.346887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.722 [2024-11-20 16:48:30.346894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:45.722 [2024-11-20 16:48:30.346901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:21:45.722 [2024-11-20 16:48:30.346908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.722 [2024-11-20 16:48:30.373806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.722 [2024-11-20 16:48:30.373856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.722 [2024-11-20 16:48:30.373867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.843 ms 00:21:45.722 [2024-11-20 16:48:30.373875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.722 [2024-11-20 16:48:30.373965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.722 [2024-11-20 16:48:30.373973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:45.722 [2024-11-20 16:48:30.373981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:45.722 [2024-11-20 16:48:30.373989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.722 [2024-11-20 16:48:30.422567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.722 [2024-11-20 16:48:30.422777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.722 [2024-11-20 16:48:30.422797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.518 ms 00:21:45.723 [2024-11-20 16:48:30.422806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.422861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 
16:48:30.422870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.723 [2024-11-20 16:48:30.422880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:45.723 [2024-11-20 16:48:30.422892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.423283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.423299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.723 [2024-11-20 16:48:30.423308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:21:45.723 [2024-11-20 16:48:30.423315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.423484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.423496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.723 [2024-11-20 16:48:30.423505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:45.723 [2024-11-20 16:48:30.423517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.436912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.436948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.723 [2024-11-20 16:48:30.436962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.376 ms 00:21:45.723 [2024-11-20 16:48:30.436969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.449628] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:45.723 [2024-11-20 16:48:30.449795] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:45.723 [2024-11-20 16:48:30.449816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.449827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:45.723 [2024-11-20 16:48:30.449837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.740 ms 00:21:45.723 [2024-11-20 16:48:30.449845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.474907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.474964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:45.723 [2024-11-20 16:48:30.474985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.020 ms 00:21:45.723 [2024-11-20 16:48:30.474993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.487202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.487255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:45.723 [2024-11-20 16:48:30.487267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.143 ms 00:21:45.723 [2024-11-20 16:48:30.487274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.498859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.498898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:21:45.723 [2024-11-20 16:48:30.498909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.545 ms 00:21:45.723 [2024-11-20 16:48:30.498916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.499553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.499579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:45.723 [2024-11-20 16:48:30.499588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:21:45.723 [2024-11-20 16:48:30.499596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.555122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.555179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:45.723 [2024-11-20 16:48:30.555192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.503 ms 00:21:45.723 [2024-11-20 16:48:30.555205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.566145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:45.723 [2024-11-20 16:48:30.568873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.569003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:45.723 [2024-11-20 16:48:30.569020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.612 ms 00:21:45.723 [2024-11-20 16:48:30.569029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.569135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.569149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:45.723 [2024-11-20 16:48:30.569163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:45.723 [2024-11-20 16:48:30.569170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.569239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.569249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:45.723 [2024-11-20 16:48:30.569257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:45.723 [2024-11-20 16:48:30.569265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.569283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.569291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:45.723 [2024-11-20 16:48:30.569299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:45.723 [2024-11-20 16:48:30.569306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.569335] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:45.723 [2024-11-20 16:48:30.569345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.569354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:45.723 [2024-11-20 16:48:30.569362] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:45.723 [2024-11-20 16:48:30.569370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.592862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.592904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:45.723 [2024-11-20 16:48:30.592915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.456 ms 00:21:45.723 [2024-11-20 16:48:30.592924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.593000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.723 [2024-11-20 16:48:30.593010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:45.723 [2024-11-20 16:48:30.593018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:45.723 [2024-11-20 16:48:30.593026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.723 [2024-11-20 16:48:30.593957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.325 ms, result 0 00:21:47.097  [2024-11-20T16:48:32.608Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-20T16:48:33.980Z] Copying: 91/1024 [MB] (46 MBps) [2024-11-20T16:48:34.911Z] Copying: 136/1024 [MB] (45 MBps) [2024-11-20T16:48:35.963Z] Copying: 179/1024 [MB] (43 MBps) [2024-11-20T16:48:36.899Z] Copying: 224/1024 [MB] (44 MBps) [2024-11-20T16:48:37.834Z] Copying: 270/1024 [MB] (45 MBps) [2024-11-20T16:48:38.770Z] Copying: 316/1024 [MB] (45 MBps) [2024-11-20T16:48:39.705Z] Copying: 360/1024 [MB] (43 MBps) [2024-11-20T16:48:40.638Z] Copying: 403/1024 [MB] (43 MBps) [2024-11-20T16:48:42.044Z] Copying: 447/1024 [MB] (44 MBps) [2024-11-20T16:48:42.610Z] Copying: 493/1024 [MB] (45 MBps) [2024-11-20T16:48:43.983Z] Copying: 537/1024 [MB] (43 MBps) [2024-11-20T16:48:44.610Z] Copying: 582/1024 [MB] (44 MBps) [2024-11-20T16:48:45.981Z] Copying: 626/1024 [MB] (44 MBps) [2024-11-20T16:48:46.914Z] Copying: 671/1024 [MB] (44 MBps) [2024-11-20T16:48:47.847Z] Copying: 714/1024 [MB] (43 MBps) [2024-11-20T16:48:48.782Z] Copying: 758/1024 [MB] (43 MBps) [2024-11-20T16:48:49.721Z] Copying: 804/1024 [MB] (46 MBps) [2024-11-20T16:48:50.650Z] Copying: 847/1024 [MB] (42 MBps) [2024-11-20T16:48:52.022Z] Copying: 889/1024 [MB] (41 MBps) [2024-11-20T16:48:52.955Z] Copying: 933/1024 [MB] (44 MBps) [2024-11-20T16:48:53.945Z] Copying: 978/1024 [MB] (44 MBps) [2024-11-20T16:48:53.945Z] Copying: 1023/1024 [MB] (44 MBps) [2024-11-20T16:48:53.945Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-20 16:48:53.626121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.626173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.059 [2024-11-20 16:48:53.626187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:09.059 [2024-11-20 16:48:53.626195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.626216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:09.059 [2024-11-20 16:48:53.628896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.628937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.059 [2024-11-20 16:48:53.628948] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:22:09.059 [2024-11-20 16:48:53.628956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.630427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.630458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.059 [2024-11-20 16:48:53.630468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:22:09.059 [2024-11-20 16:48:53.630476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.642877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.642933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.059 [2024-11-20 16:48:53.642945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.383 ms 00:22:09.059 [2024-11-20 16:48:53.642954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.649142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.649205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.059 [2024-11-20 16:48:53.649216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.151 ms 00:22:09.059 [2024-11-20 16:48:53.649224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.673177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.673226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.059 [2024-11-20 16:48:53.673238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.887 ms 00:22:09.059 [2024-11-20 16:48:53.673246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.687584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.687803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.059 [2024-11-20 16:48:53.687824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.296 ms 00:22:09.059 [2024-11-20 16:48:53.687834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.687981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.687992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.059 [2024-11-20 16:48:53.688007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:09.059 [2024-11-20 16:48:53.688015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.711350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.711543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.059 [2024-11-20 16:48:53.711561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.318 ms 00:22:09.059 [2024-11-20 16:48:53.711569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.734014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.734055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:22:09.059 [2024-11-20 16:48:53.734076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.407 ms 00:22:09.059 [2024-11-20 16:48:53.734084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.757322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.757389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.059 [2024-11-20 16:48:53.757401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.196 ms 00:22:09.059 [2024-11-20 16:48:53.757409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.780604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.059 [2024-11-20 16:48:53.780784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.059 [2024-11-20 16:48:53.780802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.106 ms 00:22:09.059 [2024-11-20 16:48:53.780809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.059 [2024-11-20 16:48:53.780847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.059 [2024-11-20 16:48:53.780861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:22:09.059 [2024-11-20 16:48:53.780986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.780994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:09.059 [2024-11-20 16:48:53.781182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781588] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.060 [2024-11-20 16:48:53.781670] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.060 [2024-11-20 16:48:53.781683] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4 00:22:09.060 [2024-11-20 16:48:53.781691] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:09.060 [2024-11-20 16:48:53.781701] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:09.060 [2024-11-20 16:48:53.781709] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:09.060 [2024-11-20 16:48:53.781716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:09.060 [2024-11-20 16:48:53.781723] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.060 [2024-11-20 16:48:53.781731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.060 [2024-11-20 16:48:53.781739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.060 [2024-11-20 16:48:53.781752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.060 [2024-11-20 16:48:53.781759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.060 [2024-11-20 16:48:53.781766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.060 [2024-11-20 16:48:53.781773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.060 [2024-11-20 16:48:53.781782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:22:09.060 [2024-11-20 16:48:53.781789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.060 [2024-11-20 16:48:53.794406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.060 [2024-11-20 16:48:53.794447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.060 [2024-11-20 16:48:53.794459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.597 ms 00:22:09.060 [2024-11-20 16:48:53.794467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.060 [2024-11-20 16:48:53.794834] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.060 [2024-11-20 16:48:53.794853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.060 [2024-11-20 16:48:53.794861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:22:09.060 [2024-11-20 16:48:53.794869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.060 [2024-11-20 16:48:53.827358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.060 [2024-11-20 16:48:53.827408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.060 [2024-11-20 16:48:53.827418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.060 [2024-11-20 16:48:53.827439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.060 [2024-11-20 16:48:53.827499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.060 [2024-11-20 16:48:53.827508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.060 [2024-11-20 16:48:53.827515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.060 [2024-11-20 16:48:53.827523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.061 [2024-11-20 16:48:53.827582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.061 [2024-11-20 16:48:53.827592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.061 [2024-11-20 16:48:53.827600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.061 [2024-11-20 16:48:53.827607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.061 [2024-11-20 16:48:53.827621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.061 [2024-11-20 16:48:53.827629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.061 [2024-11-20 16:48:53.827637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.061 [2024-11-20 16:48:53.827645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.061 [2024-11-20 16:48:53.904474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.061 [2024-11-20 16:48:53.904540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.061 [2024-11-20 16:48:53.904552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.061 [2024-11-20 16:48:53.904560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.967916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.321 [2024-11-20 16:48:53.967975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.321 [2024-11-20 16:48:53.967988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.321 [2024-11-20 16:48:53.967996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.968083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.321 [2024-11-20 16:48:53.968096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:09.321 [2024-11-20 16:48:53.968104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.321 [2024-11-20 16:48:53.968111] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.968144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.321 [2024-11-20 16:48:53.968153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:09.321 [2024-11-20 16:48:53.968160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.321 [2024-11-20 16:48:53.968167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.968260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.321 [2024-11-20 16:48:53.968273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:09.321 [2024-11-20 16:48:53.968282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.321 [2024-11-20 16:48:53.968290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.968318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.321 [2024-11-20 16:48:53.968326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:09.321 [2024-11-20 16:48:53.968334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.321 [2024-11-20 16:48:53.968342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.968375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.321 [2024-11-20 16:48:53.968408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:09.321 [2024-11-20 16:48:53.968418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.321 [2024-11-20 16:48:53.968426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.968465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.321 [2024-11-20 16:48:53.968475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:09.321 [2024-11-20 16:48:53.968483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.321 [2024-11-20 16:48:53.968490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.321 [2024-11-20 16:48:53.968599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.451 ms, result 0 00:22:11.222 00:22:11.222 00:22:11.222 16:48:55 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:11.222 [2024-11-20 16:48:55.982077] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
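The FTL management traces above follow a fixed shape: each step from mngt/ftl_mngt.c is logged as an Action (or Rollback) line followed by its "name:", "duration:" and "status:" lines, and finish_msg closes each process with a total (here 'FTL startup' took 272.325 ms and 'FTL shutdown' 342.451 ms, both with result 0). The spdk_dd command echoed above uses --count=262144, which is 1024 MiB assuming the FTL's 4 KiB logical block size, matching the "Copying: .../1024 [MB]" progress lines (the first pass averaged 44 MBps). A minimal, hypothetical Python sketch for summarizing such a console log offline, assuming one message per line as in the raw console; the script and its regular expressions are illustrative only and are not part of SPDK or this test job:

#!/usr/bin/env python3
# Hypothetical log-summarizing helper (not part of SPDK or this CI job).
# It relies only on the message shapes visible above: trace_step "name:" /
# "duration:" pairs and the finish_msg totals, one message per line.
import re
import sys
from collections import defaultdict

NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")
FINISH_RE = re.compile(r"finish_msg: \*NOTICE\*: \[FTL\]\[\w+\] Management process "
                       r"finished, name '([^']+)', duration = ([\d.]+) ms, result (\d+)")

def summarize(lines):
    durations = defaultdict(float)  # step name -> accumulated milliseconds
    pending = None                  # step name still waiting for its duration line
    totals = []                     # (process name, total ms, result) from finish_msg
    for line in lines:
        if (m := NAME_RE.search(line)):
            pending = m.group(1).strip()
        elif (m := DUR_RE.search(line)) and pending:
            durations[pending] += float(m.group(1))
            pending = None
        elif (m := FINISH_RE.search(line)):
            totals.append((m.group(1), float(m.group(2)), int(m.group(3))))
    return durations, totals

if __name__ == "__main__":
    durations, totals = summarize(sys.stdin)
    for step, ms in sorted(durations.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{ms:10.3f} ms  {step}")
    for proc, ms, result in totals:
        print(f"'{proc}': {ms:.3f} ms total, result {result}")

Fed the startup trace above, it would show, for instance, that 'Restore P2L checkpoints' alone accounts for 55.503 ms of the 272.325 ms 'FTL startup' total.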
00:22:11.222 [2024-11-20 16:48:55.982430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77400 ] 00:22:11.480 [2024-11-20 16:48:56.140825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.480 [2024-11-20 16:48:56.243306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.738 [2024-11-20 16:48:56.499821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.738 [2024-11-20 16:48:56.499882] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.997 [2024-11-20 16:48:56.654945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.655000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:11.997 [2024-11-20 16:48:56.655018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:11.997 [2024-11-20 16:48:56.655028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.655079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.655089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.997 [2024-11-20 16:48:56.655099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:11.997 [2024-11-20 16:48:56.655107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.655126] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:11.997 [2024-11-20 16:48:56.655909] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:11.997 [2024-11-20 16:48:56.655930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.655938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.997 [2024-11-20 16:48:56.655946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:22:11.997 [2024-11-20 16:48:56.655953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.657051] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:11.997 [2024-11-20 16:48:56.669302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.669352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:11.997 [2024-11-20 16:48:56.669365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.251 ms 00:22:11.997 [2024-11-20 16:48:56.669373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.669463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.669473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:11.997 [2024-11-20 16:48:56.669482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:11.997 [2024-11-20 16:48:56.669489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.674967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:11.997 [2024-11-20 16:48:56.675006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.997 [2024-11-20 16:48:56.675017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.398 ms 00:22:11.997 [2024-11-20 16:48:56.675025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.675108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.675117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.997 [2024-11-20 16:48:56.675125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:11.997 [2024-11-20 16:48:56.675132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.675183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.675193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:11.997 [2024-11-20 16:48:56.675201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:11.997 [2024-11-20 16:48:56.675208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.675230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.997 [2024-11-20 16:48:56.678788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.678820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.997 [2024-11-20 16:48:56.678830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:22:11.997 [2024-11-20 16:48:56.678840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.678875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.678883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:11.997 [2024-11-20 16:48:56.678891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:11.997 [2024-11-20 16:48:56.678899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.678921] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:11.997 [2024-11-20 16:48:56.678940] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:11.997 [2024-11-20 16:48:56.678976] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:11.997 [2024-11-20 16:48:56.678993] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:11.997 [2024-11-20 16:48:56.679106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:11.997 [2024-11-20 16:48:56.679116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:11.997 [2024-11-20 16:48:56.679126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:11.997 [2024-11-20 16:48:56.679136] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:11.997 [2024-11-20 16:48:56.679145] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:11.997 [2024-11-20 16:48:56.679153] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:11.997 [2024-11-20 16:48:56.679160] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:11.997 [2024-11-20 16:48:56.679168] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:11.997 [2024-11-20 16:48:56.679175] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:11.997 [2024-11-20 16:48:56.679185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.679192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:11.997 [2024-11-20 16:48:56.679200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:22:11.997 [2024-11-20 16:48:56.679208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.679290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.997 [2024-11-20 16:48:56.679298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:11.997 [2024-11-20 16:48:56.679305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:11.997 [2024-11-20 16:48:56.679311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.997 [2024-11-20 16:48:56.679442] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:11.997 [2024-11-20 16:48:56.679456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:11.997 [2024-11-20 16:48:56.679465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.997 [2024-11-20 16:48:56.679472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.997 [2024-11-20 16:48:56.679480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:11.997 [2024-11-20 16:48:56.679486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:11.997 [2024-11-20 16:48:56.679493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:11.997 [2024-11-20 16:48:56.679501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:11.997 [2024-11-20 16:48:56.679509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:11.997 [2024-11-20 16:48:56.679515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.997 [2024-11-20 16:48:56.679522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:11.997 [2024-11-20 16:48:56.679529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:11.997 [2024-11-20 16:48:56.679535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.997 [2024-11-20 16:48:56.679541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:11.997 [2024-11-20 16:48:56.679547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:11.997 [2024-11-20 16:48:56.679560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:11.998 [2024-11-20 16:48:56.679573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:11.998 [2024-11-20 16:48:56.679579] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:11.998 [2024-11-20 16:48:56.679595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.998 [2024-11-20 16:48:56.679608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:11.998 [2024-11-20 16:48:56.679614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.998 [2024-11-20 16:48:56.679627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:11.998 [2024-11-20 16:48:56.679633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.998 [2024-11-20 16:48:56.679645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:11.998 [2024-11-20 16:48:56.679651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.998 [2024-11-20 16:48:56.679664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:11.998 [2024-11-20 16:48:56.679671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.998 [2024-11-20 16:48:56.679683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:11.998 [2024-11-20 16:48:56.679689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:11.998 [2024-11-20 16:48:56.679696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.998 [2024-11-20 16:48:56.679703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:11.998 [2024-11-20 16:48:56.679709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:11.998 [2024-11-20 16:48:56.679715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:11.998 [2024-11-20 16:48:56.679728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:11.998 [2024-11-20 16:48:56.679734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679741] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:11.998 [2024-11-20 16:48:56.679748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:11.998 [2024-11-20 16:48:56.679755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.998 [2024-11-20 16:48:56.679762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.998 [2024-11-20 16:48:56.679769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:11.998 [2024-11-20 16:48:56.679776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:11.998 [2024-11-20 16:48:56.679782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:11.998 
[2024-11-20 16:48:56.679789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:11.998 [2024-11-20 16:48:56.679795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:11.998 [2024-11-20 16:48:56.679802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:11.998 [2024-11-20 16:48:56.679809] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:11.998 [2024-11-20 16:48:56.679818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.998 [2024-11-20 16:48:56.679826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:11.998 [2024-11-20 16:48:56.679833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:11.998 [2024-11-20 16:48:56.679840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:11.998 [2024-11-20 16:48:56.679847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:11.998 [2024-11-20 16:48:56.679854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:11.998 [2024-11-20 16:48:56.679861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:11.998 [2024-11-20 16:48:56.679867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:11.998 [2024-11-20 16:48:56.679874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:11.998 [2024-11-20 16:48:56.679881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:11.998 [2024-11-20 16:48:56.679888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:11.998 [2024-11-20 16:48:56.679894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:11.998 [2024-11-20 16:48:56.679901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:11.998 [2024-11-20 16:48:56.679907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:11.998 [2024-11-20 16:48:56.679915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:11.998 [2024-11-20 16:48:56.679921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:11.998 [2024-11-20 16:48:56.679931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.998 [2024-11-20 16:48:56.679940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:11.998 [2024-11-20 16:48:56.679948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:11.998 [2024-11-20 16:48:56.679955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:11.998 [2024-11-20 16:48:56.679962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:11.998 [2024-11-20 16:48:56.679969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.679976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:11.998 [2024-11-20 16:48:56.679983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:22:11.998 [2024-11-20 16:48:56.679990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.706259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.706308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.998 [2024-11-20 16:48:56.706321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.217 ms 00:22:11.998 [2024-11-20 16:48:56.706329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.706440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.706450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:11.998 [2024-11-20 16:48:56.706458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:11.998 [2024-11-20 16:48:56.706465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.753033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.753087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.998 [2024-11-20 16:48:56.753100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.500 ms 00:22:11.998 [2024-11-20 16:48:56.753110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.753166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.753176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.998 [2024-11-20 16:48:56.753185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:11.998 [2024-11-20 16:48:56.753195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.753597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.753614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.998 [2024-11-20 16:48:56.753624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:22:11.998 [2024-11-20 16:48:56.753632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.753761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.753778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.998 [2024-11-20 16:48:56.753786] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:11.998 [2024-11-20 16:48:56.753798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.766927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.766967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.998 [2024-11-20 16:48:56.766982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.110 ms 00:22:11.998 [2024-11-20 16:48:56.766989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.779590] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:11.998 [2024-11-20 16:48:56.779637] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:11.998 [2024-11-20 16:48:56.779649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.779657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:11.998 [2024-11-20 16:48:56.779669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.545 ms 00:22:11.998 [2024-11-20 16:48:56.779676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.804264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.804330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:11.998 [2024-11-20 16:48:56.804343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.527 ms 00:22:11.998 [2024-11-20 16:48:56.804351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.816429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.816491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:11.998 [2024-11-20 16:48:56.816503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.003 ms 00:22:11.998 [2024-11-20 16:48:56.816510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.828444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.828488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:11.998 [2024-11-20 16:48:56.828500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.877 ms 00:22:11.998 [2024-11-20 16:48:56.828508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.998 [2024-11-20 16:48:56.829174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.998 [2024-11-20 16:48:56.829196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:11.998 [2024-11-20 16:48:56.829205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:22:11.998 [2024-11-20 16:48:56.829215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.255 [2024-11-20 16:48:56.885339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.255 [2024-11-20 16:48:56.885415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:12.255 [2024-11-20 16:48:56.885437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.104 ms 00:22:12.255 [2024-11-20 16:48:56.885446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.255 [2024-11-20 16:48:56.896565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:12.256 [2024-11-20 16:48:56.899525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.256 [2024-11-20 16:48:56.899557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:12.256 [2024-11-20 16:48:56.899570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.013 ms 00:22:12.256 [2024-11-20 16:48:56.899579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.256 [2024-11-20 16:48:56.899689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.256 [2024-11-20 16:48:56.899699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:12.256 [2024-11-20 16:48:56.899708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:12.256 [2024-11-20 16:48:56.899718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.256 [2024-11-20 16:48:56.899783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.256 [2024-11-20 16:48:56.899798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:12.256 [2024-11-20 16:48:56.899807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:12.256 [2024-11-20 16:48:56.899814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.256 [2024-11-20 16:48:56.899833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.256 [2024-11-20 16:48:56.899841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:12.256 [2024-11-20 16:48:56.899849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:12.256 [2024-11-20 16:48:56.899855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.256 [2024-11-20 16:48:56.899887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:12.256 [2024-11-20 16:48:56.899899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.256 [2024-11-20 16:48:56.899907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:12.256 [2024-11-20 16:48:56.899914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:12.256 [2024-11-20 16:48:56.899921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.256 [2024-11-20 16:48:56.924540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.256 [2024-11-20 16:48:56.924591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:12.256 [2024-11-20 16:48:56.924605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.600 ms 00:22:12.256 [2024-11-20 16:48:56.924618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.256 [2024-11-20 16:48:56.924730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.256 [2024-11-20 16:48:56.924741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:12.256 [2024-11-20 16:48:56.924749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:12.256 [2024-11-20 16:48:56.924756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
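The startup trace above reports 20971520 L2P entries with an address size of 4 bytes; that works out to exactly the 80.00 MiB "Region l2p" in the NV cache layout dump, and to the blk_sz:0x5000 (20480-block) region in the superblock metadata dump. A quick, hypothetical consistency check of that arithmetic; the 4 KiB logical block size and the pairing of the type-0x2 superblock region with the l2p region are inferences, not something the log states:

# Hypothetical consistency check on the layout numbers printed above
# (not part of the test; assumes a 4 KiB FTL logical block size).
MiB = 1024 * 1024
l2p_entries = 20971520                      # "L2P entries" from the startup trace
l2p_addr_size = 4                           # "L2P address size" in bytes per entry
l2p_table_bytes = l2p_entries * l2p_addr_size
assert l2p_table_bytes == 80 * MiB          # matches "Region l2p ... blocks: 80.00 MiB"
assert 0x5000 * 4096 == l2p_table_bytes     # matches the blk_sz:0x5000 superblock region
print(f"L2P table: {l2p_table_bytes / MiB:.2f} MiB")

The same kind of cross-check appears to hold for the other regions, e.g. the blk_sz:0x80 entry (128 blocks, 0.50 MiB) lining up with the band_md region at offset 80.12 MiB.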
00:22:12.256 [2024-11-20 16:48:56.925774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.389 ms, result 0 00:22:13.630  [2024-11-20T16:48:59.449Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-20T16:49:00.383Z] Copying: 95/1024 [MB] (48 MBps) [2024-11-20T16:49:01.316Z] Copying: 143/1024 [MB] (47 MBps) [2024-11-20T16:49:02.249Z] Copying: 189/1024 [MB] (46 MBps) [2024-11-20T16:49:03.182Z] Copying: 234/1024 [MB] (45 MBps) [2024-11-20T16:49:04.117Z] Copying: 280/1024 [MB] (45 MBps) [2024-11-20T16:49:05.491Z] Copying: 326/1024 [MB] (46 MBps) [2024-11-20T16:49:06.426Z] Copying: 374/1024 [MB] (47 MBps) [2024-11-20T16:49:07.360Z] Copying: 422/1024 [MB] (48 MBps) [2024-11-20T16:49:08.301Z] Copying: 471/1024 [MB] (48 MBps) [2024-11-20T16:49:09.233Z] Copying: 506/1024 [MB] (34 MBps) [2024-11-20T16:49:10.165Z] Copying: 548/1024 [MB] (42 MBps) [2024-11-20T16:49:11.545Z] Copying: 594/1024 [MB] (46 MBps) [2024-11-20T16:49:12.117Z] Copying: 628/1024 [MB] (33 MBps) [2024-11-20T16:49:13.493Z] Copying: 651/1024 [MB] (22 MBps) [2024-11-20T16:49:14.428Z] Copying: 677/1024 [MB] (26 MBps) [2024-11-20T16:49:15.373Z] Copying: 702/1024 [MB] (25 MBps) [2024-11-20T16:49:16.320Z] Copying: 741/1024 [MB] (38 MBps) [2024-11-20T16:49:17.263Z] Copying: 756/1024 [MB] (15 MBps) [2024-11-20T16:49:18.206Z] Copying: 773/1024 [MB] (16 MBps) [2024-11-20T16:49:19.146Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-20T16:49:20.533Z] Copying: 794/1024 [MB] (10 MBps) [2024-11-20T16:49:21.477Z] Copying: 805/1024 [MB] (10 MBps) [2024-11-20T16:49:22.417Z] Copying: 815/1024 [MB] (10 MBps) [2024-11-20T16:49:23.362Z] Copying: 826/1024 [MB] (10 MBps) [2024-11-20T16:49:24.301Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-20T16:49:25.244Z] Copying: 847/1024 [MB] (10 MBps) [2024-11-20T16:49:26.183Z] Copying: 858/1024 [MB] (11 MBps) [2024-11-20T16:49:27.142Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-20T16:49:28.114Z] Copying: 880/1024 [MB] (10 MBps) [2024-11-20T16:49:29.499Z] Copying: 890/1024 [MB] (10 MBps) [2024-11-20T16:49:30.442Z] Copying: 901/1024 [MB] (10 MBps) [2024-11-20T16:49:31.384Z] Copying: 912/1024 [MB] (10 MBps) [2024-11-20T16:49:32.327Z] Copying: 923/1024 [MB] (10 MBps) [2024-11-20T16:49:33.270Z] Copying: 934/1024 [MB] (10 MBps) [2024-11-20T16:49:34.213Z] Copying: 944/1024 [MB] (10 MBps) [2024-11-20T16:49:35.157Z] Copying: 955/1024 [MB] (10 MBps) [2024-11-20T16:49:36.162Z] Copying: 965/1024 [MB] (10 MBps) [2024-11-20T16:49:37.107Z] Copying: 976/1024 [MB] (10 MBps) [2024-11-20T16:49:38.494Z] Copying: 986/1024 [MB] (10 MBps) [2024-11-20T16:49:39.436Z] Copying: 997/1024 [MB] (10 MBps) [2024-11-20T16:49:40.380Z] Copying: 1007/1024 [MB] (10 MBps) [2024-11-20T16:49:40.954Z] Copying: 1017/1024 [MB] (10 MBps) [2024-11-20T16:49:40.954Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 16:49:40.759746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.760007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:56.068 [2024-11-20 16:49:40.760102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:56.068 [2024-11-20 16:49:40.760141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.760199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:56.068 [2024-11-20 16:49:40.764458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 
[2024-11-20 16:49:40.764615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:56.068 [2024-11-20 16:49:40.764703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.197 ms 00:22:56.068 [2024-11-20 16:49:40.764738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.765088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.765178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:56.068 [2024-11-20 16:49:40.765249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:22:56.068 [2024-11-20 16:49:40.765285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.769349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.769434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:56.068 [2024-11-20 16:49:40.769483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.025 ms 00:22:56.068 [2024-11-20 16:49:40.769505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.775652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.775742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:56.068 [2024-11-20 16:49:40.775792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.113 ms 00:22:56.068 [2024-11-20 16:49:40.775802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.800731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.800764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:56.068 [2024-11-20 16:49:40.800776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.869 ms 00:22:56.068 [2024-11-20 16:49:40.800785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.814545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.814574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:56.068 [2024-11-20 16:49:40.814586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.726 ms 00:22:56.068 [2024-11-20 16:49:40.814600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.814724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.814738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:56.068 [2024-11-20 16:49:40.814746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:56.068 [2024-11-20 16:49:40.814753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.838321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.838352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:56.068 [2024-11-20 16:49:40.838362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.553 ms 00:22:56.068 [2024-11-20 16:49:40.838371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.861621] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.861659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:56.068 [2024-11-20 16:49:40.861669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.212 ms 00:22:56.068 [2024-11-20 16:49:40.861677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.883919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.883948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:56.068 [2024-11-20 16:49:40.883958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.211 ms 00:22:56.068 [2024-11-20 16:49:40.883967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.906767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-20 16:49:40.906794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:56.068 [2024-11-20 16:49:40.906804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.744 ms 00:22:56.068 [2024-11-20 16:49:40.906812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-20 16:49:40.906843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:56.068 [2024-11-20 16:49:40.906857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:22:56.068 [2024-11-20 16:49:40.906978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.906993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:56.068 [2024-11-20 16:49:40.907133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907532] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:56.069 [2024-11-20 16:49:40.907625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:56.069 [2024-11-20 16:49:40.907636] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4 00:22:56.069 [2024-11-20 16:49:40.907643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:56.069 [2024-11-20 16:49:40.907651] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:56.069 [2024-11-20 16:49:40.907658] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:56.069 [2024-11-20 16:49:40.907665] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:56.069 [2024-11-20 16:49:40.907672] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:56.069 [2024-11-20 16:49:40.907680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:56.069 [2024-11-20 16:49:40.907693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:56.069 [2024-11-20 16:49:40.907699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:56.069 [2024-11-20 16:49:40.907706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:56.069 [2024-11-20 16:49:40.907713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.069 [2024-11-20 16:49:40.907720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:56.069 [2024-11-20 16:49:40.907728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:22:56.069 [2024-11-20 16:49:40.907735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.069 [2024-11-20 16:49:40.919771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.069 [2024-11-20 16:49:40.919795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:56.069 [2024-11-20 16:49:40.919806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.018 ms 
00:22:56.069 [2024-11-20 16:49:40.919815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.069 [2024-11-20 16:49:40.920161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.069 [2024-11-20 16:49:40.920169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:56.069 [2024-11-20 16:49:40.920177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:56.069 [2024-11-20 16:49:40.920188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:40.952771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.331 [2024-11-20 16:49:40.952801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.331 [2024-11-20 16:49:40.952811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.331 [2024-11-20 16:49:40.952819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:40.952875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.331 [2024-11-20 16:49:40.952884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.331 [2024-11-20 16:49:40.952893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.331 [2024-11-20 16:49:40.952905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:40.952958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.331 [2024-11-20 16:49:40.952968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.331 [2024-11-20 16:49:40.952976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.331 [2024-11-20 16:49:40.952984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:40.952999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.331 [2024-11-20 16:49:40.953008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.331 [2024-11-20 16:49:40.953016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.331 [2024-11-20 16:49:40.953024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:41.028447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.331 [2024-11-20 16:49:41.028487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.331 [2024-11-20 16:49:41.028499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.331 [2024-11-20 16:49:41.028506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:41.090430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.331 [2024-11-20 16:49:41.090469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.331 [2024-11-20 16:49:41.090480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.331 [2024-11-20 16:49:41.090488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:41.090555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.331 [2024-11-20 16:49:41.090564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.331 [2024-11-20 
16:49:41.090572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.331 [2024-11-20 16:49:41.090579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.331 [2024-11-20 16:49:41.090610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.332 [2024-11-20 16:49:41.090618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.332 [2024-11-20 16:49:41.090626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.332 [2024-11-20 16:49:41.090633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.332 [2024-11-20 16:49:41.090721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.332 [2024-11-20 16:49:41.090731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.332 [2024-11-20 16:49:41.090738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.332 [2024-11-20 16:49:41.090745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.332 [2024-11-20 16:49:41.090771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.332 [2024-11-20 16:49:41.090780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:56.332 [2024-11-20 16:49:41.090787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.332 [2024-11-20 16:49:41.090794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.332 [2024-11-20 16:49:41.090827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.332 [2024-11-20 16:49:41.090837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.332 [2024-11-20 16:49:41.090844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.332 [2024-11-20 16:49:41.090851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.332 [2024-11-20 16:49:41.090887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.332 [2024-11-20 16:49:41.090895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.332 [2024-11-20 16:49:41.090903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.332 [2024-11-20 16:49:41.090910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.332 [2024-11-20 16:49:41.091019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.256 ms, result 0 00:22:56.907 00:22:56.907 00:22:56.907 16:49:41 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:59.454 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:59.454 16:49:43 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:59.454 [2024-11-20 16:49:43.969328] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
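The two shell steps echoed by restore.sh above (the md5sum check at restore.sh@76 and the spdk_dd invocation at restore.sh@79) form a small verify-then-write-back sequence. The following is a standalone sketch of that sequence, assuming the same SPDK tree layout and an ftl.json that already defines the ftl0 bdev; the paths and flags are copied from the log lines above rather than from restore.sh itself.

    #!/usr/bin/env bash
    # Sketch of the verify/write-back step shown above; assumes an SPDK build under $SPDK
    # and a config/ftl.json describing the ftl0 bdev (both paths taken from the log).
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk

    # 1. Verify the generated test file against its recorded checksum.
    md5sum -c "$SPDK/test/ftl/testfile.md5"

    # 2. Write the file back onto the ftl0 bdev at the same --seek offset restore.sh uses.
    "$SPDK/build/bin/spdk_dd" \
      --if="$SPDK/test/ftl/testfile" \
      --ob=ftl0 \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --seek=131072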
00:22:59.454 [2024-11-20 16:49:43.969465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77898 ] 00:22:59.454 [2024-11-20 16:49:44.129314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.454 [2024-11-20 16:49:44.230817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.714 [2024-11-20 16:49:44.483582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.714 [2024-11-20 16:49:44.483649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.975 [2024-11-20 16:49:44.641606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.641651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:59.975 [2024-11-20 16:49:44.641669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:59.975 [2024-11-20 16:49:44.641677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.641725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.641736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.975 [2024-11-20 16:49:44.641746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:59.975 [2024-11-20 16:49:44.641753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.641773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:59.975 [2024-11-20 16:49:44.642574] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:59.975 [2024-11-20 16:49:44.642603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.642611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.975 [2024-11-20 16:49:44.642620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:22:59.975 [2024-11-20 16:49:44.642627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.643742] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:59.975 [2024-11-20 16:49:44.656396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.656441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:59.975 [2024-11-20 16:49:44.656453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.655 ms 00:22:59.975 [2024-11-20 16:49:44.656460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.656515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.656524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:59.975 [2024-11-20 16:49:44.656533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:59.975 [2024-11-20 16:49:44.656540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.661501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:59.975 [2024-11-20 16:49:44.661528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.975 [2024-11-20 16:49:44.661537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.915 ms 00:22:59.975 [2024-11-20 16:49:44.661544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.661614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.661622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.975 [2024-11-20 16:49:44.661630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:59.975 [2024-11-20 16:49:44.661637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.661686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.661695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:59.975 [2024-11-20 16:49:44.661703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:59.975 [2024-11-20 16:49:44.661711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.661732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:59.975 [2024-11-20 16:49:44.665074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.665102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.975 [2024-11-20 16:49:44.665112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:22:59.975 [2024-11-20 16:49:44.665121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.665148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.975 [2024-11-20 16:49:44.665156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:59.975 [2024-11-20 16:49:44.665165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:59.975 [2024-11-20 16:49:44.665172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.975 [2024-11-20 16:49:44.665190] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:59.975 [2024-11-20 16:49:44.665207] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:59.975 [2024-11-20 16:49:44.665239] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:59.975 [2024-11-20 16:49:44.665256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:59.975 [2024-11-20 16:49:44.665358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:59.975 [2024-11-20 16:49:44.665368] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:59.975 [2024-11-20 16:49:44.665387] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:59.975 [2024-11-20 16:49:44.665397] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:59.975 [2024-11-20 16:49:44.665406] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:59.975 [2024-11-20 16:49:44.665414] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:59.975 [2024-11-20 16:49:44.665422] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:59.975 [2024-11-20 16:49:44.665429] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:59.976 [2024-11-20 16:49:44.665435] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:59.976 [2024-11-20 16:49:44.665446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.976 [2024-11-20 16:49:44.665453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:59.976 [2024-11-20 16:49:44.665460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:22:59.976 [2024-11-20 16:49:44.665467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.976 [2024-11-20 16:49:44.665549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.976 [2024-11-20 16:49:44.665557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:59.976 [2024-11-20 16:49:44.665564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:59.976 [2024-11-20 16:49:44.665571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.976 [2024-11-20 16:49:44.665681] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:59.976 [2024-11-20 16:49:44.665693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:59.976 [2024-11-20 16:49:44.665702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.976 [2024-11-20 16:49:44.665710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:59.976 [2024-11-20 16:49:44.665725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:59.976 [2024-11-20 16:49:44.665738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:59.976 [2024-11-20 16:49:44.665745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.976 [2024-11-20 16:49:44.665759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:59.976 [2024-11-20 16:49:44.665766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:59.976 [2024-11-20 16:49:44.665772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.976 [2024-11-20 16:49:44.665781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:59.976 [2024-11-20 16:49:44.665788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:59.976 [2024-11-20 16:49:44.665800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:59.976 [2024-11-20 16:49:44.665813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:59.976 [2024-11-20 16:49:44.665820] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:59.976 [2024-11-20 16:49:44.665833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.976 [2024-11-20 16:49:44.665846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:59.976 [2024-11-20 16:49:44.665852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.976 [2024-11-20 16:49:44.665865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:59.976 [2024-11-20 16:49:44.665871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.976 [2024-11-20 16:49:44.665884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:59.976 [2024-11-20 16:49:44.665890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.976 [2024-11-20 16:49:44.665903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:59.976 [2024-11-20 16:49:44.665910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.976 [2024-11-20 16:49:44.665922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:59.976 [2024-11-20 16:49:44.665929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:59.976 [2024-11-20 16:49:44.665935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.976 [2024-11-20 16:49:44.665941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:59.976 [2024-11-20 16:49:44.665948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:59.976 [2024-11-20 16:49:44.665954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:59.976 [2024-11-20 16:49:44.665967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:59.976 [2024-11-20 16:49:44.665973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.976 [2024-11-20 16:49:44.665979] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:59.976 [2024-11-20 16:49:44.665987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:59.976 [2024-11-20 16:49:44.665995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.976 [2024-11-20 16:49:44.666002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.976 [2024-11-20 16:49:44.666009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:59.976 [2024-11-20 16:49:44.666016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:59.976 [2024-11-20 16:49:44.666023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:59.976 
[2024-11-20 16:49:44.666030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:59.976 [2024-11-20 16:49:44.666037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:59.976 [2024-11-20 16:49:44.666044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:59.976 [2024-11-20 16:49:44.666051] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:59.976 [2024-11-20 16:49:44.666060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.976 [2024-11-20 16:49:44.666068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:59.976 [2024-11-20 16:49:44.666076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:59.976 [2024-11-20 16:49:44.666083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:59.976 [2024-11-20 16:49:44.666090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:59.976 [2024-11-20 16:49:44.666097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:59.976 [2024-11-20 16:49:44.666104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:59.976 [2024-11-20 16:49:44.666111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:59.976 [2024-11-20 16:49:44.666118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:59.976 [2024-11-20 16:49:44.666125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:59.976 [2024-11-20 16:49:44.666132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:59.976 [2024-11-20 16:49:44.666138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:59.976 [2024-11-20 16:49:44.666145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:59.976 [2024-11-20 16:49:44.666152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:59.976 [2024-11-20 16:49:44.666159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:59.976 [2024-11-20 16:49:44.666166] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:59.976 [2024-11-20 16:49:44.666176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.976 [2024-11-20 16:49:44.666184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:59.976 [2024-11-20 16:49:44.666191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:59.976 [2024-11-20 16:49:44.666198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:59.976 [2024-11-20 16:49:44.666205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:59.976 [2024-11-20 16:49:44.666212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.976 [2024-11-20 16:49:44.666220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:59.976 [2024-11-20 16:49:44.666229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:22:59.976 [2024-11-20 16:49:44.666236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.976 [2024-11-20 16:49:44.691992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.976 [2024-11-20 16:49:44.692024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.976 [2024-11-20 16:49:44.692034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.717 ms 00:22:59.976 [2024-11-20 16:49:44.692042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.976 [2024-11-20 16:49:44.692125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.976 [2024-11-20 16:49:44.692132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:59.976 [2024-11-20 16:49:44.692140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:59.976 [2024-11-20 16:49:44.692147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.976 [2024-11-20 16:49:44.735874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.735917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.977 [2024-11-20 16:49:44.735930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.678 ms 00:22:59.977 [2024-11-20 16:49:44.735944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.735986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.735996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.977 [2024-11-20 16:49:44.736004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:59.977 [2024-11-20 16:49:44.736014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.736401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.736424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.977 [2024-11-20 16:49:44.736433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:22:59.977 [2024-11-20 16:49:44.736440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.736561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.736570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.977 [2024-11-20 16:49:44.736579] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:59.977 [2024-11-20 16:49:44.736589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.749568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.749601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.977 [2024-11-20 16:49:44.749613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.959 ms 00:22:59.977 [2024-11-20 16:49:44.749621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.762435] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:59.977 [2024-11-20 16:49:44.762485] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:59.977 [2024-11-20 16:49:44.762496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.762504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:59.977 [2024-11-20 16:49:44.762513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.786 ms 00:22:59.977 [2024-11-20 16:49:44.762520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.786922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.786962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:59.977 [2024-11-20 16:49:44.786973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.346 ms 00:22:59.977 [2024-11-20 16:49:44.786982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.798538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.798571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:59.977 [2024-11-20 16:49:44.798581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.517 ms 00:22:59.977 [2024-11-20 16:49:44.798588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.810325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.810357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:59.977 [2024-11-20 16:49:44.810368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.705 ms 00:22:59.977 [2024-11-20 16:49:44.810376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.977 [2024-11-20 16:49:44.810989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.977 [2024-11-20 16:49:44.811012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:59.977 [2024-11-20 16:49:44.811021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:22:59.977 [2024-11-20 16:49:44.811031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.866808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.866868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:00.240 [2024-11-20 16:49:44.866888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.759 ms 00:23:00.240 [2024-11-20 16:49:44.866896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.877511] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:00.240 [2024-11-20 16:49:44.880075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.880107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:00.240 [2024-11-20 16:49:44.880119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.112 ms 00:23:00.240 [2024-11-20 16:49:44.880128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.880233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.880244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:00.240 [2024-11-20 16:49:44.880253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:00.240 [2024-11-20 16:49:44.880263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.880330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.880341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:00.240 [2024-11-20 16:49:44.880349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:00.240 [2024-11-20 16:49:44.880356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.880375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.880396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:00.240 [2024-11-20 16:49:44.880411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:00.240 [2024-11-20 16:49:44.880418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.880447] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:00.240 [2024-11-20 16:49:44.880459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.880466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:00.240 [2024-11-20 16:49:44.880474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:00.240 [2024-11-20 16:49:44.880481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.903789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.903824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:00.240 [2024-11-20 16:49:44.903835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.292 ms 00:23:00.240 [2024-11-20 16:49:44.903846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.240 [2024-11-20 16:49:44.903912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.240 [2024-11-20 16:49:44.903921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:00.240 [2024-11-20 16:49:44.903929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:00.240 [2024-11-20 16:49:44.903937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:00.240 [2024-11-20 16:49:44.905191] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.179 ms, result 0 00:23:01.176  [2024-11-20T16:49:46.995Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-20T16:49:47.956Z] Copying: 73/1024 [MB] (44 MBps) [2024-11-20T16:49:49.329Z] Copying: 120/1024 [MB] (46 MBps) [2024-11-20T16:49:50.262Z] Copying: 165/1024 [MB] (45 MBps) [2024-11-20T16:49:51.193Z] Copying: 207/1024 [MB] (42 MBps) [2024-11-20T16:49:52.149Z] Copying: 254/1024 [MB] (46 MBps) [2024-11-20T16:49:53.083Z] Copying: 299/1024 [MB] (45 MBps) [2024-11-20T16:49:54.015Z] Copying: 342/1024 [MB] (42 MBps) [2024-11-20T16:49:54.948Z] Copying: 387/1024 [MB] (45 MBps) [2024-11-20T16:49:56.317Z] Copying: 433/1024 [MB] (45 MBps) [2024-11-20T16:49:57.250Z] Copying: 480/1024 [MB] (46 MBps) [2024-11-20T16:49:58.182Z] Copying: 527/1024 [MB] (46 MBps) [2024-11-20T16:49:59.116Z] Copying: 574/1024 [MB] (46 MBps) [2024-11-20T16:50:00.049Z] Copying: 619/1024 [MB] (45 MBps) [2024-11-20T16:50:01.067Z] Copying: 666/1024 [MB] (47 MBps) [2024-11-20T16:50:01.996Z] Copying: 712/1024 [MB] (46 MBps) [2024-11-20T16:50:02.930Z] Copying: 759/1024 [MB] (47 MBps) [2024-11-20T16:50:04.304Z] Copying: 806/1024 [MB] (46 MBps) [2024-11-20T16:50:05.237Z] Copying: 851/1024 [MB] (45 MBps) [2024-11-20T16:50:06.169Z] Copying: 896/1024 [MB] (44 MBps) [2024-11-20T16:50:07.102Z] Copying: 942/1024 [MB] (46 MBps) [2024-11-20T16:50:08.042Z] Copying: 989/1024 [MB] (46 MBps) [2024-11-20T16:50:08.609Z] Copying: 1023/1024 [MB] (33 MBps) [2024-11-20T16:50:08.609Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-20 16:50:08.569710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.723 [2024-11-20 16:50:08.569868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:23.723 [2024-11-20 16:50:08.569889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:23.723 [2024-11-20 16:50:08.569906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.723 [2024-11-20 16:50:08.571252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:23.723 [2024-11-20 16:50:08.576976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.723 [2024-11-20 16:50:08.577006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:23.723 [2024-11-20 16:50:08.577017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.691 ms 00:23:23.723 [2024-11-20 16:50:08.577026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.723 [2024-11-20 16:50:08.589171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.723 [2024-11-20 16:50:08.589204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:23.723 [2024-11-20 16:50:08.589215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.938 ms 00:23:23.723 [2024-11-20 16:50:08.589224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.723 [2024-11-20 16:50:08.604939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.723 [2024-11-20 16:50:08.604970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:23.723 [2024-11-20 16:50:08.604981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.696 ms 00:23:23.723 [2024-11-20 16:50:08.604989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.611146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.611168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:23.982 [2024-11-20 16:50:08.611179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.132 ms 00:23:23.982 [2024-11-20 16:50:08.611187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.634173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.634204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:23.982 [2024-11-20 16:50:08.634215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.944 ms 00:23:23.982 [2024-11-20 16:50:08.634224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.647538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.647570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:23.982 [2024-11-20 16:50:08.647581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.282 ms 00:23:23.982 [2024-11-20 16:50:08.647590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.688907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.688938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:23.982 [2024-11-20 16:50:08.688948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.281 ms 00:23:23.982 [2024-11-20 16:50:08.688956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.712000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.712034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:23.982 [2024-11-20 16:50:08.712046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:23:23.982 [2024-11-20 16:50:08.712054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.734690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.734725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:23.982 [2024-11-20 16:50:08.734735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.604 ms 00:23:23.982 [2024-11-20 16:50:08.734742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.756637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.756664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:23.982 [2024-11-20 16:50:08.756673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.863 ms 00:23:23.982 [2024-11-20 16:50:08.756681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.778615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.982 [2024-11-20 16:50:08.778644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:23.982 [2024-11-20 16:50:08.778654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.882 ms 00:23:23.982 
[2024-11-20 16:50:08.778661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.982 [2024-11-20 16:50:08.778692] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:23.982 [2024-11-20 16:50:08.778707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103936 / 261120 wr_cnt: 1 state: open 00:23:23.982 [2024-11-20 16:50:08.778717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:23.982 [2024-11-20 16:50:08.778866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778880] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.778999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 
16:50:08.779065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:23.983 [2024-11-20 16:50:08.779258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:23.983 [2024-11-20 16:50:08.779368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:23.984 [2024-11-20 16:50:08.779476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:23.984 [2024-11-20 16:50:08.779484] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4 00:23:23.984 [2024-11-20 16:50:08.779492] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103936 00:23:23.984 [2024-11-20 16:50:08.779499] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104896 00:23:23.984 [2024-11-20 16:50:08.779506] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103936 00:23:23.984 [2024-11-20 16:50:08.779514] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:23:23.984 [2024-11-20 16:50:08.779522] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:23.984 [2024-11-20 16:50:08.779533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:23.984 [2024-11-20 16:50:08.779546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:23.984 [2024-11-20 16:50:08.779553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:23.984 [2024-11-20 16:50:08.779559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:23.984 [2024-11-20 16:50:08.779567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.984 [2024-11-20 16:50:08.779574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:23.984 [2024-11-20 16:50:08.779582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:23:23.984 [2024-11-20 16:50:08.779589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.984 [2024-11-20 16:50:08.791974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.984 [2024-11-20 16:50:08.792000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:23.984 [2024-11-20 16:50:08.792010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.370 ms 00:23:23.984 [2024-11-20 16:50:08.792022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.984 [2024-11-20 16:50:08.792361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.984 [2024-11-20 16:50:08.792398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:23.984 [2024-11-20 16:50:08.792408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:23.984 [2024-11-20 16:50:08.792415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.984 [2024-11-20 16:50:08.824923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.984 [2024-11-20 16:50:08.824966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.984 [2024-11-20 16:50:08.824981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.984 [2024-11-20 16:50:08.824988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.984 [2024-11-20 16:50:08.825051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.984 [2024-11-20 16:50:08.825059] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.984 [2024-11-20 16:50:08.825067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.984 [2024-11-20 16:50:08.825074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.984 [2024-11-20 16:50:08.825133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.984 [2024-11-20 16:50:08.825143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.984 [2024-11-20 16:50:08.825150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.984 [2024-11-20 16:50:08.825161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.984 [2024-11-20 16:50:08.825175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.984 [2024-11-20 16:50:08.825182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.984 [2024-11-20 16:50:08.825189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.984 [2024-11-20 16:50:08.825196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.901258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.901296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.242 [2024-11-20 16:50:08.901311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.242 [2024-11-20 16:50:08.901318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.963262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.963299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.242 [2024-11-20 16:50:08.963309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.242 [2024-11-20 16:50:08.963317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.963394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.963404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.242 [2024-11-20 16:50:08.963411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.242 [2024-11-20 16:50:08.963418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.963458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.963466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.242 [2024-11-20 16:50:08.963473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.242 [2024-11-20 16:50:08.963480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.963561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.963575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.242 [2024-11-20 16:50:08.963582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.242 [2024-11-20 16:50:08.963590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.963620] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.963629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:24.242 [2024-11-20 16:50:08.963636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.242 [2024-11-20 16:50:08.963644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.963676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.963684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.242 [2024-11-20 16:50:08.963691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.242 [2024-11-20 16:50:08.963698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.242 [2024-11-20 16:50:08.963739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.242 [2024-11-20 16:50:08.963749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.243 [2024-11-20 16:50:08.963756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.243 [2024-11-20 16:50:08.963763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.243 [2024-11-20 16:50:08.963869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.025 ms, result 0 00:23:26.215 00:23:26.215 00:23:26.215 16:50:10 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:26.215 [2024-11-20 16:50:10.804576] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
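(Annotation, not part of the captured output.) Two figures printed above can be cross-checked by hand. The shutdown statistics dump reported total writes 104896 against user writes 103936, which matches the printed WAF of 1.0092, and the spdk_dd restore command passes --skip=131072 --count=262144. Assuming dd-style block-based --skip/--count and a 4 KiB FTL block size (an inference from the 1024 MB "Copying" totals, not a value read from the log), that is a 1 GiB read starting 512 MiB into the device. A minimal sketch of the arithmetic:

```python
# Rough cross-check of figures printed in the log above. The 4096-byte block
# size is an assumption inferred from the 1024 MB copy totals, not logged.
total_writes = 104896          # ftl_debug.c dump: "total writes"
user_writes = 103936           # ftl_debug.c dump: "user writes"
print(f"WAF = {total_writes / user_writes:.4f}")   # ~1.0092, matching the dump

block_size = 4096              # assumed FTL bdev block size (bytes)
skip_blocks = 131072           # spdk_dd --skip
count_blocks = 262144          # spdk_dd --count
print(f"offset = {skip_blocks * block_size / 2**20:.0f} MiB")    # 512 MiB
print(f"length = {count_blocks * block_size / 2**20:.0f} MiB")   # 1024 MiB
```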
00:23:26.215 [2024-11-20 16:50:10.804688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78173 ] 00:23:26.215 [2024-11-20 16:50:10.963116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.215 [2024-11-20 16:50:11.063071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.473 [2024-11-20 16:50:11.317360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.473 [2024-11-20 16:50:11.317424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.733 [2024-11-20 16:50:11.470132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.470184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:26.733 [2024-11-20 16:50:11.470202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:26.733 [2024-11-20 16:50:11.470210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.470254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.470264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.733 [2024-11-20 16:50:11.470274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:26.733 [2024-11-20 16:50:11.470281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.470300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:26.733 [2024-11-20 16:50:11.471013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:26.733 [2024-11-20 16:50:11.471034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.471042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.733 [2024-11-20 16:50:11.471050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:23:26.733 [2024-11-20 16:50:11.471058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.472205] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:26.733 [2024-11-20 16:50:11.484351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.484392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:26.733 [2024-11-20 16:50:11.484404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.148 ms 00:23:26.733 [2024-11-20 16:50:11.484413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.484467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.484476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:26.733 [2024-11-20 16:50:11.484484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:26.733 [2024-11-20 16:50:11.484491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.489516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:26.733 [2024-11-20 16:50:11.489542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.733 [2024-11-20 16:50:11.489551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.967 ms 00:23:26.733 [2024-11-20 16:50:11.489559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.489626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.489634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.733 [2024-11-20 16:50:11.489642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:26.733 [2024-11-20 16:50:11.489649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.489691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.489699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:26.733 [2024-11-20 16:50:11.489707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:26.733 [2024-11-20 16:50:11.489714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.489735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:26.733 [2024-11-20 16:50:11.493018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.493041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.733 [2024-11-20 16:50:11.493050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:23:26.733 [2024-11-20 16:50:11.493059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.493086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.493094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:26.733 [2024-11-20 16:50:11.493102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:26.733 [2024-11-20 16:50:11.493109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.493127] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:26.733 [2024-11-20 16:50:11.493144] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:26.733 [2024-11-20 16:50:11.493176] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:26.733 [2024-11-20 16:50:11.493193] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:26.733 [2024-11-20 16:50:11.493295] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:26.733 [2024-11-20 16:50:11.493305] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:26.733 [2024-11-20 16:50:11.493315] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:26.733 [2024-11-20 16:50:11.493324] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493333] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493342] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:26.733 [2024-11-20 16:50:11.493349] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:26.733 [2024-11-20 16:50:11.493356] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:26.733 [2024-11-20 16:50:11.493363] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:26.733 [2024-11-20 16:50:11.493372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.493391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:26.733 [2024-11-20 16:50:11.493398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:23:26.733 [2024-11-20 16:50:11.493405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.493486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.733 [2024-11-20 16:50:11.493494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:26.733 [2024-11-20 16:50:11.493502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:26.733 [2024-11-20 16:50:11.493509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.733 [2024-11-20 16:50:11.493609] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:26.733 [2024-11-20 16:50:11.493620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:26.733 [2024-11-20 16:50:11.493629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:26.733 [2024-11-20 16:50:11.493651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:26.733 [2024-11-20 16:50:11.493673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.733 [2024-11-20 16:50:11.493687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:26.733 [2024-11-20 16:50:11.493693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:26.733 [2024-11-20 16:50:11.493699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.733 [2024-11-20 16:50:11.493706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:26.733 [2024-11-20 16:50:11.493713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:26.733 [2024-11-20 16:50:11.493724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:26.733 [2024-11-20 16:50:11.493738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493745] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:26.733 [2024-11-20 16:50:11.493758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:26.733 [2024-11-20 16:50:11.493778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:26.733 [2024-11-20 16:50:11.493798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:26.733 [2024-11-20 16:50:11.493817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.733 [2024-11-20 16:50:11.493830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:26.733 [2024-11-20 16:50:11.493837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:26.733 [2024-11-20 16:50:11.493844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.734 [2024-11-20 16:50:11.493850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:26.734 [2024-11-20 16:50:11.493856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:26.734 [2024-11-20 16:50:11.493863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.734 [2024-11-20 16:50:11.493869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:26.734 [2024-11-20 16:50:11.493876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:26.734 [2024-11-20 16:50:11.493882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.734 [2024-11-20 16:50:11.493889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:26.734 [2024-11-20 16:50:11.493895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:26.734 [2024-11-20 16:50:11.493902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.734 [2024-11-20 16:50:11.493909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:26.734 [2024-11-20 16:50:11.493916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:26.734 [2024-11-20 16:50:11.493923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.734 [2024-11-20 16:50:11.493930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.734 [2024-11-20 16:50:11.493937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:26.734 [2024-11-20 16:50:11.493945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:26.734 [2024-11-20 16:50:11.493951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:26.734 
[2024-11-20 16:50:11.493958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:26.734 [2024-11-20 16:50:11.493964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:26.734 [2024-11-20 16:50:11.493971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:26.734 [2024-11-20 16:50:11.493979] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:26.734 [2024-11-20 16:50:11.493987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.734 [2024-11-20 16:50:11.493996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:26.734 [2024-11-20 16:50:11.494003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:26.734 [2024-11-20 16:50:11.494009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:26.734 [2024-11-20 16:50:11.494017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:26.734 [2024-11-20 16:50:11.494023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:26.734 [2024-11-20 16:50:11.494030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:26.734 [2024-11-20 16:50:11.494037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:26.734 [2024-11-20 16:50:11.494044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:26.734 [2024-11-20 16:50:11.494051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:26.734 [2024-11-20 16:50:11.494058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:26.734 [2024-11-20 16:50:11.494065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:26.734 [2024-11-20 16:50:11.494073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:26.734 [2024-11-20 16:50:11.494080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:26.734 [2024-11-20 16:50:11.494087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:26.734 [2024-11-20 16:50:11.494094] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:26.734 [2024-11-20 16:50:11.494104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.734 [2024-11-20 16:50:11.494112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:26.734 [2024-11-20 16:50:11.494120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:26.734 [2024-11-20 16:50:11.494127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:26.734 [2024-11-20 16:50:11.494134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:26.734 [2024-11-20 16:50:11.494141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.494148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:26.734 [2024-11-20 16:50:11.494156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:23:26.734 [2024-11-20 16:50:11.494163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.520043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.520072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.734 [2024-11-20 16:50:11.520082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.828 ms 00:23:26.734 [2024-11-20 16:50:11.520090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.520170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.520178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:26.734 [2024-11-20 16:50:11.520186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:26.734 [2024-11-20 16:50:11.520193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.561802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.561838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.734 [2024-11-20 16:50:11.561850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.561 ms 00:23:26.734 [2024-11-20 16:50:11.561857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.561897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.561907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.734 [2024-11-20 16:50:11.561915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:26.734 [2024-11-20 16:50:11.561925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.562278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.562301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.734 [2024-11-20 16:50:11.562311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:26.734 [2024-11-20 16:50:11.562318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.562451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.562460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.734 [2024-11-20 16:50:11.562468] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:23:26.734 [2024-11-20 16:50:11.562475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.575601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.575627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.734 [2024-11-20 16:50:11.575639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.089 ms 00:23:26.734 [2024-11-20 16:50:11.575647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.587969] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:26.734 [2024-11-20 16:50:11.587999] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:26.734 [2024-11-20 16:50:11.588011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.588019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:26.734 [2024-11-20 16:50:11.588028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.278 ms 00:23:26.734 [2024-11-20 16:50:11.588035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.734 [2024-11-20 16:50:11.611979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.734 [2024-11-20 16:50:11.612015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:26.734 [2024-11-20 16:50:11.612026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.905 ms 00:23:26.734 [2024-11-20 16:50:11.612035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.623717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.623753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:26.992 [2024-11-20 16:50:11.623763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.645 ms 00:23:26.992 [2024-11-20 16:50:11.623770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.635162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.635188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:26.992 [2024-11-20 16:50:11.635198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.359 ms 00:23:26.992 [2024-11-20 16:50:11.635206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.635818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.635837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:26.992 [2024-11-20 16:50:11.635845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:23:26.992 [2024-11-20 16:50:11.635855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.690833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.690883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:26.992 [2024-11-20 16:50:11.690902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.960 ms 00:23:26.992 [2024-11-20 16:50:11.690911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.701197] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:26.992 [2024-11-20 16:50:11.703701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.703727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:26.992 [2024-11-20 16:50:11.703739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.743 ms 00:23:26.992 [2024-11-20 16:50:11.703749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.703838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.703848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:26.992 [2024-11-20 16:50:11.703857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:26.992 [2024-11-20 16:50:11.703866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.705134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.705163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:26.992 [2024-11-20 16:50:11.705174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:23:26.992 [2024-11-20 16:50:11.705182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.705208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.705216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:26.992 [2024-11-20 16:50:11.705225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.992 [2024-11-20 16:50:11.705233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.992 [2024-11-20 16:50:11.705267] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:26.992 [2024-11-20 16:50:11.705279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.992 [2024-11-20 16:50:11.705287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:26.992 [2024-11-20 16:50:11.705296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:26.992 [2024-11-20 16:50:11.705305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.993 [2024-11-20 16:50:11.729629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.993 [2024-11-20 16:50:11.729666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:26.993 [2024-11-20 16:50:11.729678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.304 ms 00:23:26.993 [2024-11-20 16:50:11.729692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.993 [2024-11-20 16:50:11.729767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.993 [2024-11-20 16:50:11.729777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:26.993 [2024-11-20 16:50:11.729785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:26.993 [2024-11-20 16:50:11.729792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:26.993 [2024-11-20 16:50:11.730826] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.273 ms, result 0 00:23:28.366  [2024-11-20T16:50:14.186Z] Copying: 36/1024 [MB] (36 MBps) [2024-11-20T16:50:15.179Z] Copying: 85/1024 [MB] (48 MBps) [2024-11-20T16:50:16.113Z] Copying: 130/1024 [MB] (45 MBps) [2024-11-20T16:50:17.047Z] Copying: 179/1024 [MB] (49 MBps) [2024-11-20T16:50:17.979Z] Copying: 228/1024 [MB] (48 MBps) [2024-11-20T16:50:19.351Z] Copying: 276/1024 [MB] (48 MBps) [2024-11-20T16:50:19.917Z] Copying: 323/1024 [MB] (46 MBps) [2024-11-20T16:50:21.287Z] Copying: 371/1024 [MB] (47 MBps) [2024-11-20T16:50:22.221Z] Copying: 419/1024 [MB] (47 MBps) [2024-11-20T16:50:23.224Z] Copying: 466/1024 [MB] (47 MBps) [2024-11-20T16:50:24.156Z] Copying: 515/1024 [MB] (48 MBps) [2024-11-20T16:50:25.089Z] Copying: 562/1024 [MB] (46 MBps) [2024-11-20T16:50:26.021Z] Copying: 608/1024 [MB] (46 MBps) [2024-11-20T16:50:26.955Z] Copying: 656/1024 [MB] (47 MBps) [2024-11-20T16:50:28.327Z] Copying: 704/1024 [MB] (47 MBps) [2024-11-20T16:50:29.259Z] Copying: 750/1024 [MB] (46 MBps) [2024-11-20T16:50:30.276Z] Copying: 800/1024 [MB] (49 MBps) [2024-11-20T16:50:31.209Z] Copying: 849/1024 [MB] (49 MBps) [2024-11-20T16:50:32.141Z] Copying: 898/1024 [MB] (48 MBps) [2024-11-20T16:50:33.071Z] Copying: 945/1024 [MB] (46 MBps) [2024-11-20T16:50:33.637Z] Copying: 992/1024 [MB] (47 MBps) [2024-11-20T16:50:34.571Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-20 16:50:34.430322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.685 [2024-11-20 16:50:34.430398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:49.685 [2024-11-20 16:50:34.430413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:49.685 [2024-11-20 16:50:34.430421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.685 [2024-11-20 16:50:34.430452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:49.685 [2024-11-20 16:50:34.433091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.685 [2024-11-20 16:50:34.433124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:49.685 [2024-11-20 16:50:34.433135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:23:49.685 [2024-11-20 16:50:34.433144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.685 [2024-11-20 16:50:34.433364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.685 [2024-11-20 16:50:34.433388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:49.685 [2024-11-20 16:50:34.433397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:23:49.685 [2024-11-20 16:50:34.433405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.685 [2024-11-20 16:50:34.438397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.685 [2024-11-20 16:50:34.438433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:49.685 [2024-11-20 16:50:34.438443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.974 ms 00:23:49.685 [2024-11-20 16:50:34.438450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.685 [2024-11-20 16:50:34.445556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:49.685 [2024-11-20 16:50:34.445591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:49.685 [2024-11-20 16:50:34.445601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.051 ms 00:23:49.685 [2024-11-20 16:50:34.445608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.686 [2024-11-20 16:50:34.471299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.686 [2024-11-20 16:50:34.471350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:49.686 [2024-11-20 16:50:34.471362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.627 ms 00:23:49.686 [2024-11-20 16:50:34.471370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.686 [2024-11-20 16:50:34.500055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.686 [2024-11-20 16:50:34.500114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:49.686 [2024-11-20 16:50:34.500128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.564 ms 00:23:49.686 [2024-11-20 16:50:34.500136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.686 [2024-11-20 16:50:34.558850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.686 [2024-11-20 16:50:34.558915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:49.686 [2024-11-20 16:50:34.558930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.281 ms 00:23:49.686 [2024-11-20 16:50:34.558938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.946 [2024-11-20 16:50:34.582677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.946 [2024-11-20 16:50:34.582720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:49.946 [2024-11-20 16:50:34.582733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.724 ms 00:23:49.946 [2024-11-20 16:50:34.582741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.946 [2024-11-20 16:50:34.605126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.946 [2024-11-20 16:50:34.605167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:49.946 [2024-11-20 16:50:34.605187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.322 ms 00:23:49.946 [2024-11-20 16:50:34.605195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.946 [2024-11-20 16:50:34.627000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.946 [2024-11-20 16:50:34.627041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:49.946 [2024-11-20 16:50:34.627053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.722 ms 00:23:49.946 [2024-11-20 16:50:34.627060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.946 [2024-11-20 16:50:34.649156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.946 [2024-11-20 16:50:34.649195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:49.946 [2024-11-20 16:50:34.649206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.005 ms 00:23:49.946 [2024-11-20 16:50:34.649214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.946 [2024-11-20 
16:50:34.649331] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:49.946 [2024-11-20 16:50:34.649360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:49.946 [2024-11-20 16:50:34.649371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 
16:50:34.649563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:49.946 [2024-11-20 16:50:34.649607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:23:49.947 [2024-11-20 16:50:34.649749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.649998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:49.947 [2024-11-20 16:50:34.650146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:49.947 [2024-11-20 16:50:34.650157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebc7f13a-85a4-4ff7-8e91-c07828f0f9c4 00:23:49.947 [2024-11-20 16:50:34.650165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:49.947 [2024-11-20 16:50:34.650173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 28096 00:23:49.947 [2024-11-20 16:50:34.650180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 27136 00:23:49.947 [2024-11-20 16:50:34.650189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0354 00:23:49.947 [2024-11-20 16:50:34.650196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:49.947 [2024-11-20 16:50:34.650207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:49.947 [2024-11-20 16:50:34.650214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:49.947 [2024-11-20 16:50:34.650226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:49.947 [2024-11-20 16:50:34.650233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:49.947 [2024-11-20 16:50:34.650241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.947 [2024-11-20 16:50:34.650248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:49.947 [2024-11-20 16:50:34.650256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:23:49.947 [2024-11-20 16:50:34.650263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.947 [2024-11-20 16:50:34.662457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.947 [2024-11-20 16:50:34.662488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:49.947 [2024-11-20 16:50:34.662499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.178 ms 00:23:49.947 [2024-11-20 16:50:34.662511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.947 [2024-11-20 16:50:34.662846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.947 [2024-11-20 16:50:34.662865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:49.947 [2024-11-20 16:50:34.662874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:23:49.947 [2024-11-20 16:50:34.662881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.947 [2024-11-20 16:50:34.695306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.947 [2024-11-20 16:50:34.695349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:49.948 [2024-11-20 16:50:34.695363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.948 [2024-11-20 16:50:34.695370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.948 [2024-11-20 16:50:34.695440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.948 [2024-11-20 16:50:34.695448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:49.948 [2024-11-20 16:50:34.695456] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.948 [2024-11-20 16:50:34.695463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.948 [2024-11-20 16:50:34.695518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.948 [2024-11-20 16:50:34.695533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:49.948 [2024-11-20 16:50:34.695541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.948 [2024-11-20 16:50:34.695551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.948 [2024-11-20 16:50:34.695566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.948 [2024-11-20 16:50:34.695577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:49.948 [2024-11-20 16:50:34.695584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.948 [2024-11-20 16:50:34.695591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.948 [2024-11-20 16:50:34.772681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.948 [2024-11-20 16:50:34.772726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.948 [2024-11-20 16:50:34.772743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.948 [2024-11-20 16:50:34.772751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.835962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.206 [2024-11-20 16:50:34.836011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:50.206 [2024-11-20 16:50:34.836022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.206 [2024-11-20 16:50:34.836030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.836105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.206 [2024-11-20 16:50:34.836114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:50.206 [2024-11-20 16:50:34.836122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.206 [2024-11-20 16:50:34.836130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.836167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.206 [2024-11-20 16:50:34.836190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:50.206 [2024-11-20 16:50:34.836198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.206 [2024-11-20 16:50:34.836205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.836290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.206 [2024-11-20 16:50:34.836305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:50.206 [2024-11-20 16:50:34.836313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.206 [2024-11-20 16:50:34.836320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.836353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.206 [2024-11-20 16:50:34.836362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:23:50.206 [2024-11-20 16:50:34.836370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.206 [2024-11-20 16:50:34.836395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.836430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.206 [2024-11-20 16:50:34.836438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:50.206 [2024-11-20 16:50:34.836446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.206 [2024-11-20 16:50:34.836453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.836496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.206 [2024-11-20 16:50:34.836505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:50.206 [2024-11-20 16:50:34.836513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.206 [2024-11-20 16:50:34.836520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.206 [2024-11-20 16:50:34.836626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.278 ms, result 0 00:23:50.772 00:23:50.772 00:23:50.772 16:50:35 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.306 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76912 00:23:53.306 16:50:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76912 ']' 00:23:53.306 16:50:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76912 00:23:53.306 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76912) - No such process 00:23:53.306 Process with pid 76912 is not found 00:23:53.306 16:50:37 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 76912 is not found' 00:23:53.306 Remove shared memory files 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:53.306 16:50:37 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:53.306 00:23:53.306 real 2m32.000s 00:23:53.306 user 2m21.354s 00:23:53.306 sys 0m11.859s 00:23:53.306 16:50:37 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.306 16:50:37 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:53.306 
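The md5sum -c call above is the restore test's actual pass/fail check: the checksum recorded for the data written through ftl0 before the shutdown has to match what is read back after the device is brought up again. Reduced to its essence (file names taken from the log, the write/shutdown/restore steps in between elided, and the first command only presumed to be how the .md5 file was produced earlier in the test):

    # Verify-after-restore pattern (sketch, not the full restore.sh flow):
    md5sum testfile > testfile.md5    # record checksum of the data written through ftl0
    # ... shut the FTL bdev down, create it again, read the data back into testfile ...
    md5sum -c testfile.md5            # prints "testfile: OK" when the restored data is intact

The "testfile: OK" line above is this check succeeding; restore_kill then removes the temporary files and kills the target process.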
************************************ 00:23:53.306 END TEST ftl_restore 00:23:53.306 ************************************ 00:23:53.306 16:50:37 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.306 16:50:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:53.306 16:50:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.306 16:50:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:53.306 ************************************ 00:23:53.306 START TEST ftl_dirty_shutdown 00:23:53.306 ************************************ 00:23:53.306 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.306 * Looking for test storage... 00:23:53.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.306 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:53.306 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:53.306 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:53.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.307 --rc genhtml_branch_coverage=1 00:23:53.307 --rc genhtml_function_coverage=1 00:23:53.307 --rc genhtml_legend=1 00:23:53.307 --rc geninfo_all_blocks=1 00:23:53.307 --rc geninfo_unexecuted_blocks=1 00:23:53.307 00:23:53.307 ' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:53.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.307 --rc genhtml_branch_coverage=1 00:23:53.307 --rc genhtml_function_coverage=1 00:23:53.307 --rc genhtml_legend=1 00:23:53.307 --rc geninfo_all_blocks=1 00:23:53.307 --rc geninfo_unexecuted_blocks=1 00:23:53.307 00:23:53.307 ' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:53.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.307 --rc genhtml_branch_coverage=1 00:23:53.307 --rc genhtml_function_coverage=1 00:23:53.307 --rc genhtml_legend=1 00:23:53.307 --rc geninfo_all_blocks=1 00:23:53.307 --rc geninfo_unexecuted_blocks=1 00:23:53.307 00:23:53.307 ' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:53.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.307 --rc genhtml_branch_coverage=1 00:23:53.307 --rc genhtml_function_coverage=1 00:23:53.307 --rc genhtml_legend=1 00:23:53.307 --rc geninfo_all_blocks=1 00:23:53.307 --rc geninfo_unexecuted_blocks=1 00:23:53.307 00:23:53.307 ' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:53.307 16:50:37 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:53.307 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78522 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78522 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78522 ']' 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.308 16:50:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:53.308 [2024-11-20 16:50:38.006143] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
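The lines that follow build the dirty_shutdown test's FTL bdev stack one RPC at a time: attach the base NVMe controller, drop any leftover lvstore, create a fresh lvstore and a thin-provisioned lvol on it, attach the NV-cache NVMe controller, split off a cache partition, and finally create the FTL bdev. Condensed into the equivalent rpc.py invocations (addresses, sizes and UUIDs copied from the log below; a summary sketch, not a replacement for dirty_shutdown.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0       # base device -> nvme0n1
    $rpc bdev_lvol_delete_lvstore -u b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d   # clear leftover lvstore
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                               # lvstore on the base bdev
    $rpc bdev_lvol_create nvme0n1p0 103424 -t \
         -u d5fba332-12db-4108-88aa-1ea3a09f22c9                            # 103424 MiB thin lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0        # NV cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                                 # 5171 MiB cache partition nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 \
         -d 28d1972e-b14e-4825-9ee2-a822738d105b \
         --l2p_dram_limit 10 -c nvc0n1p0                                    # FTL bdev, 10 MiB L2P cache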
00:23:53.308 [2024-11-20 16:50:38.006244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78522 ] 00:23:53.308 [2024-11-20 16:50:38.153616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.571 [2024-11-20 16:50:38.255950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:54.140 16:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:54.399 { 00:23:54.399 "name": "nvme0n1", 00:23:54.399 "aliases": [ 00:23:54.399 "690bee11-b078-426b-8dbf-3a9ad8886834" 00:23:54.399 ], 00:23:54.399 "product_name": "NVMe disk", 00:23:54.399 "block_size": 4096, 00:23:54.399 "num_blocks": 1310720, 00:23:54.399 "uuid": "690bee11-b078-426b-8dbf-3a9ad8886834", 00:23:54.399 "numa_id": -1, 00:23:54.399 "assigned_rate_limits": { 00:23:54.399 "rw_ios_per_sec": 0, 00:23:54.399 "rw_mbytes_per_sec": 0, 00:23:54.399 "r_mbytes_per_sec": 0, 00:23:54.399 "w_mbytes_per_sec": 0 00:23:54.399 }, 00:23:54.399 "claimed": true, 00:23:54.399 "claim_type": "read_many_write_one", 00:23:54.399 "zoned": false, 00:23:54.399 "supported_io_types": { 00:23:54.399 "read": true, 00:23:54.399 "write": true, 00:23:54.399 "unmap": true, 00:23:54.399 "flush": true, 00:23:54.399 "reset": true, 00:23:54.399 "nvme_admin": true, 00:23:54.399 "nvme_io": true, 00:23:54.399 "nvme_io_md": false, 00:23:54.399 "write_zeroes": true, 00:23:54.399 "zcopy": false, 00:23:54.399 "get_zone_info": false, 00:23:54.399 "zone_management": false, 00:23:54.399 "zone_append": false, 00:23:54.399 "compare": true, 00:23:54.399 "compare_and_write": false, 00:23:54.399 "abort": true, 00:23:54.399 "seek_hole": false, 00:23:54.399 "seek_data": false, 00:23:54.399 
"copy": true, 00:23:54.399 "nvme_iov_md": false 00:23:54.399 }, 00:23:54.399 "driver_specific": { 00:23:54.399 "nvme": [ 00:23:54.399 { 00:23:54.399 "pci_address": "0000:00:11.0", 00:23:54.399 "trid": { 00:23:54.399 "trtype": "PCIe", 00:23:54.399 "traddr": "0000:00:11.0" 00:23:54.399 }, 00:23:54.399 "ctrlr_data": { 00:23:54.399 "cntlid": 0, 00:23:54.399 "vendor_id": "0x1b36", 00:23:54.399 "model_number": "QEMU NVMe Ctrl", 00:23:54.399 "serial_number": "12341", 00:23:54.399 "firmware_revision": "8.0.0", 00:23:54.399 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:54.399 "oacs": { 00:23:54.399 "security": 0, 00:23:54.399 "format": 1, 00:23:54.399 "firmware": 0, 00:23:54.399 "ns_manage": 1 00:23:54.399 }, 00:23:54.399 "multi_ctrlr": false, 00:23:54.399 "ana_reporting": false 00:23:54.399 }, 00:23:54.399 "vs": { 00:23:54.399 "nvme_version": "1.4" 00:23:54.399 }, 00:23:54.399 "ns_data": { 00:23:54.399 "id": 1, 00:23:54.399 "can_share": false 00:23:54.399 } 00:23:54.399 } 00:23:54.399 ], 00:23:54.399 "mp_policy": "active_passive" 00:23:54.399 } 00:23:54.399 } 00:23:54.399 ]' 00:23:54.399 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:54.658 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9e27ec3-45fb-4e15-b0e7-2e8d76cf906d 00:23:54.915 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:55.175 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=d5fba332-12db-4108-88aa-1ea3a09f22c9 00:23:55.175 16:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d5fba332-12db-4108-88aa-1ea3a09f22c9 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:55.434 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:55.695 { 00:23:55.695 "name": "28d1972e-b14e-4825-9ee2-a822738d105b", 00:23:55.695 "aliases": [ 00:23:55.695 "lvs/nvme0n1p0" 00:23:55.695 ], 00:23:55.695 "product_name": "Logical Volume", 00:23:55.695 "block_size": 4096, 00:23:55.695 "num_blocks": 26476544, 00:23:55.695 "uuid": "28d1972e-b14e-4825-9ee2-a822738d105b", 00:23:55.695 "assigned_rate_limits": { 00:23:55.695 "rw_ios_per_sec": 0, 00:23:55.695 "rw_mbytes_per_sec": 0, 00:23:55.695 "r_mbytes_per_sec": 0, 00:23:55.695 "w_mbytes_per_sec": 0 00:23:55.695 }, 00:23:55.695 "claimed": false, 00:23:55.695 "zoned": false, 00:23:55.695 "supported_io_types": { 00:23:55.695 "read": true, 00:23:55.695 "write": true, 00:23:55.695 "unmap": true, 00:23:55.695 "flush": false, 00:23:55.695 "reset": true, 00:23:55.695 "nvme_admin": false, 00:23:55.695 "nvme_io": false, 00:23:55.695 "nvme_io_md": false, 00:23:55.695 "write_zeroes": true, 00:23:55.695 "zcopy": false, 00:23:55.695 "get_zone_info": false, 00:23:55.695 "zone_management": false, 00:23:55.695 "zone_append": false, 00:23:55.695 "compare": false, 00:23:55.695 "compare_and_write": false, 00:23:55.695 "abort": false, 00:23:55.695 "seek_hole": true, 00:23:55.695 "seek_data": true, 00:23:55.695 "copy": false, 00:23:55.695 "nvme_iov_md": false 00:23:55.695 }, 00:23:55.695 "driver_specific": { 00:23:55.695 "lvol": { 00:23:55.695 "lvol_store_uuid": "d5fba332-12db-4108-88aa-1ea3a09f22c9", 00:23:55.695 "base_bdev": "nvme0n1", 00:23:55.695 "thin_provision": true, 00:23:55.695 "num_allocated_clusters": 0, 00:23:55.695 "snapshot": false, 00:23:55.695 "clone": false, 00:23:55.695 "esnap_clone": false 00:23:55.695 } 00:23:55.695 } 00:23:55.695 } 00:23:55.695 ]' 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:55.695 16:50:40 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:55.957 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:55.957 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:55.958 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.958 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=28d1972e-b14e-4825-9ee2-a822738d105b 00:23:55.958 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:55.958 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:55.958 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:55.958 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28d1972e-b14e-4825-9ee2-a822738d105b 00:23:56.216 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.216 { 00:23:56.216 "name": "28d1972e-b14e-4825-9ee2-a822738d105b", 00:23:56.216 "aliases": [ 00:23:56.216 "lvs/nvme0n1p0" 00:23:56.216 ], 00:23:56.216 "product_name": "Logical Volume", 00:23:56.216 "block_size": 4096, 00:23:56.216 "num_blocks": 26476544, 00:23:56.216 "uuid": "28d1972e-b14e-4825-9ee2-a822738d105b", 00:23:56.216 "assigned_rate_limits": { 00:23:56.216 "rw_ios_per_sec": 0, 00:23:56.216 "rw_mbytes_per_sec": 0, 00:23:56.216 "r_mbytes_per_sec": 0, 00:23:56.216 "w_mbytes_per_sec": 0 00:23:56.216 }, 00:23:56.216 "claimed": false, 00:23:56.216 "zoned": false, 00:23:56.216 "supported_io_types": { 00:23:56.216 "read": true, 00:23:56.216 "write": true, 00:23:56.216 "unmap": true, 00:23:56.216 "flush": false, 00:23:56.216 "reset": true, 00:23:56.216 "nvme_admin": false, 00:23:56.216 "nvme_io": false, 00:23:56.216 "nvme_io_md": false, 00:23:56.216 "write_zeroes": true, 00:23:56.216 "zcopy": false, 00:23:56.216 "get_zone_info": false, 00:23:56.216 "zone_management": false, 00:23:56.216 "zone_append": false, 00:23:56.216 "compare": false, 00:23:56.216 "compare_and_write": false, 00:23:56.216 "abort": false, 00:23:56.216 "seek_hole": true, 00:23:56.216 "seek_data": true, 00:23:56.216 "copy": false, 00:23:56.216 "nvme_iov_md": false 00:23:56.216 }, 00:23:56.216 "driver_specific": { 00:23:56.216 "lvol": { 00:23:56.217 "lvol_store_uuid": "d5fba332-12db-4108-88aa-1ea3a09f22c9", 00:23:56.217 "base_bdev": "nvme0n1", 00:23:56.217 "thin_provision": true, 00:23:56.217 "num_allocated_clusters": 0, 00:23:56.217 "snapshot": false, 00:23:56.217 "clone": false, 00:23:56.217 "esnap_clone": false 00:23:56.217 } 00:23:56.217 } 00:23:56.217 } 00:23:56.217 ]' 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:56.217 16:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 28d1972e-b14e-4825-9ee2-a822738d105b 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=28d1972e-b14e-4825-9ee2-a822738d105b 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28d1972e-b14e-4825-9ee2-a822738d105b 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.476 { 00:23:56.476 "name": "28d1972e-b14e-4825-9ee2-a822738d105b", 00:23:56.476 "aliases": [ 00:23:56.476 "lvs/nvme0n1p0" 00:23:56.476 ], 00:23:56.476 "product_name": "Logical Volume", 00:23:56.476 "block_size": 4096, 00:23:56.476 "num_blocks": 26476544, 00:23:56.476 "uuid": "28d1972e-b14e-4825-9ee2-a822738d105b", 00:23:56.476 "assigned_rate_limits": { 00:23:56.476 "rw_ios_per_sec": 0, 00:23:56.476 "rw_mbytes_per_sec": 0, 00:23:56.476 "r_mbytes_per_sec": 0, 00:23:56.476 "w_mbytes_per_sec": 0 00:23:56.476 }, 00:23:56.476 "claimed": false, 00:23:56.476 "zoned": false, 00:23:56.476 "supported_io_types": { 00:23:56.476 "read": true, 00:23:56.476 "write": true, 00:23:56.476 "unmap": true, 00:23:56.476 "flush": false, 00:23:56.476 "reset": true, 00:23:56.476 "nvme_admin": false, 00:23:56.476 "nvme_io": false, 00:23:56.476 "nvme_io_md": false, 00:23:56.476 "write_zeroes": true, 00:23:56.476 "zcopy": false, 00:23:56.476 "get_zone_info": false, 00:23:56.476 "zone_management": false, 00:23:56.476 "zone_append": false, 00:23:56.476 "compare": false, 00:23:56.476 "compare_and_write": false, 00:23:56.476 "abort": false, 00:23:56.476 "seek_hole": true, 00:23:56.476 "seek_data": true, 00:23:56.476 "copy": false, 00:23:56.476 "nvme_iov_md": false 00:23:56.476 }, 00:23:56.476 "driver_specific": { 00:23:56.476 "lvol": { 00:23:56.476 "lvol_store_uuid": "d5fba332-12db-4108-88aa-1ea3a09f22c9", 00:23:56.476 "base_bdev": "nvme0n1", 00:23:56.476 "thin_provision": true, 00:23:56.476 "num_allocated_clusters": 0, 00:23:56.476 "snapshot": false, 00:23:56.476 "clone": false, 00:23:56.476 "esnap_clone": false 00:23:56.476 } 00:23:56.476 } 00:23:56.476 } 00:23:56.476 ]' 00:23:56.476 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 28d1972e-b14e-4825-9ee2-a822738d105b 
--l2p_dram_limit 10' 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:56.735 16:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 28d1972e-b14e-4825-9ee2-a822738d105b --l2p_dram_limit 10 -c nvc0n1p0 00:23:56.735 [2024-11-20 16:50:41.595819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.595860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:56.735 [2024-11-20 16:50:41.595873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:56.735 [2024-11-20 16:50:41.595881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.595931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.595940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.735 [2024-11-20 16:50:41.595947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:56.735 [2024-11-20 16:50:41.595954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.595974] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:56.735 [2024-11-20 16:50:41.596660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:56.735 [2024-11-20 16:50:41.596687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.596693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.735 [2024-11-20 16:50:41.596702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:23:56.735 [2024-11-20 16:50:41.596708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.597072] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0afc11df-403e-4ed8-837d-a5655901f1fc 00:23:56.735 [2024-11-20 16:50:41.598115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.598146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:56.735 [2024-11-20 16:50:41.598155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:56.735 [2024-11-20 16:50:41.598163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.603270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.603300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.735 [2024-11-20 16:50:41.603310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.070 ms 00:23:56.735 [2024-11-20 16:50:41.603318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.603405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.603415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.735 [2024-11-20 16:50:41.603422] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:56.735 [2024-11-20 16:50:41.603432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.603483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.603493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:56.735 [2024-11-20 16:50:41.603500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:56.735 [2024-11-20 16:50:41.603509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.603528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:56.735 [2024-11-20 16:50:41.606560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.606590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.735 [2024-11-20 16:50:41.606599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:23:56.735 [2024-11-20 16:50:41.606605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.606633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.735 [2024-11-20 16:50:41.606639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:56.735 [2024-11-20 16:50:41.606647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:56.735 [2024-11-20 16:50:41.606653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.735 [2024-11-20 16:50:41.606667] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:56.735 [2024-11-20 16:50:41.606774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:56.735 [2024-11-20 16:50:41.606791] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:56.735 [2024-11-20 16:50:41.606801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:56.735 [2024-11-20 16:50:41.606810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:56.735 [2024-11-20 16:50:41.606817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:56.735 [2024-11-20 16:50:41.606825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:56.736 [2024-11-20 16:50:41.606831] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:56.736 [2024-11-20 16:50:41.606840] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:56.736 [2024-11-20 16:50:41.606846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:56.736 [2024-11-20 16:50:41.606853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.736 [2024-11-20 16:50:41.606858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:56.736 [2024-11-20 16:50:41.606865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:23:56.736 [2024-11-20 16:50:41.606878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.736 [2024-11-20 16:50:41.606944] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.736 [2024-11-20 16:50:41.606950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:56.736 [2024-11-20 16:50:41.606957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:56.736 [2024-11-20 16:50:41.606963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.736 [2024-11-20 16:50:41.607045] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:56.736 [2024-11-20 16:50:41.607056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:56.736 [2024-11-20 16:50:41.607064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:56.736 [2024-11-20 16:50:41.607082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:56.736 [2024-11-20 16:50:41.607101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.736 [2024-11-20 16:50:41.607113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:56.736 [2024-11-20 16:50:41.607118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:56.736 [2024-11-20 16:50:41.607124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.736 [2024-11-20 16:50:41.607129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:56.736 [2024-11-20 16:50:41.607135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:56.736 [2024-11-20 16:50:41.607140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:56.736 [2024-11-20 16:50:41.607155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:56.736 [2024-11-20 16:50:41.607173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:56.736 [2024-11-20 16:50:41.607190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:56.736 [2024-11-20 16:50:41.607210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607221] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:56.736 [2024-11-20 16:50:41.607226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:56.736 [2024-11-20 16:50:41.607246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.736 [2024-11-20 16:50:41.607257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:56.736 [2024-11-20 16:50:41.607262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:56.736 [2024-11-20 16:50:41.607268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.736 [2024-11-20 16:50:41.607273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:56.736 [2024-11-20 16:50:41.607280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:56.736 [2024-11-20 16:50:41.607285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:56.736 [2024-11-20 16:50:41.607296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:56.736 [2024-11-20 16:50:41.607303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607307] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:56.736 [2024-11-20 16:50:41.607315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:56.736 [2024-11-20 16:50:41.607321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.736 [2024-11-20 16:50:41.607334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:56.736 [2024-11-20 16:50:41.607341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:56.736 [2024-11-20 16:50:41.607346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:56.736 [2024-11-20 16:50:41.607353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:56.736 [2024-11-20 16:50:41.607358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:56.736 [2024-11-20 16:50:41.607364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:56.736 [2024-11-20 16:50:41.607372] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:56.736 [2024-11-20 16:50:41.607396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.736 [2024-11-20 16:50:41.607405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:56.736 [2024-11-20 16:50:41.607413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:56.736 [2024-11-20 16:50:41.607418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:56.736 [2024-11-20 16:50:41.607425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:56.736 [2024-11-20 16:50:41.607431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:56.736 [2024-11-20 16:50:41.607438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:56.736 [2024-11-20 16:50:41.607444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:56.736 [2024-11-20 16:50:41.607451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:56.736 [2024-11-20 16:50:41.607456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:56.736 [2024-11-20 16:50:41.607465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:56.736 [2024-11-20 16:50:41.607470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:56.736 [2024-11-20 16:50:41.607478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:56.736 [2024-11-20 16:50:41.607484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:56.736 [2024-11-20 16:50:41.607491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:56.736 [2024-11-20 16:50:41.607496] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:56.736 [2024-11-20 16:50:41.607505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.736 [2024-11-20 16:50:41.607511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:56.736 [2024-11-20 16:50:41.607518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:56.736 [2024-11-20 16:50:41.607524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:56.736 [2024-11-20 16:50:41.607531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:56.736 [2024-11-20 16:50:41.607537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.736 [2024-11-20 16:50:41.607544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:56.736 [2024-11-20 16:50:41.607550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:23:56.736 [2024-11-20 16:50:41.607557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.736 [2024-11-20 16:50:41.607600] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:56.736 [2024-11-20 16:50:41.607615] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:00.017 [2024-11-20 16:50:44.201697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.201764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:00.017 [2024-11-20 16:50:44.201780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2594.087 ms 00:24:00.017 [2024-11-20 16:50:44.201790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.227012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.227061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.017 [2024-11-20 16:50:44.227074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.032 ms 00:24:00.017 [2024-11-20 16:50:44.227083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.227213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.227226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:00.017 [2024-11-20 16:50:44.227234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:00.017 [2024-11-20 16:50:44.227244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.257337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.257392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.017 [2024-11-20 16:50:44.257404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.057 ms 00:24:00.017 [2024-11-20 16:50:44.257415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.257452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.257465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.017 [2024-11-20 16:50:44.257473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:00.017 [2024-11-20 16:50:44.257482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.257877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.257905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.017 [2024-11-20 16:50:44.257915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:24:00.017 [2024-11-20 16:50:44.257924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.258051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.258070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.017 [2024-11-20 16:50:44.258081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:00.017 [2024-11-20 16:50:44.258091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.271999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.272040] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.017 [2024-11-20 16:50:44.272051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.885 ms 00:24:00.017 [2024-11-20 16:50:44.272060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.283349] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:00.017 [2024-11-20 16:50:44.286022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.286053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:00.017 [2024-11-20 16:50:44.286066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.878 ms 00:24:00.017 [2024-11-20 16:50:44.286074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.356626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.356672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:00.017 [2024-11-20 16:50:44.356689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.520 ms 00:24:00.017 [2024-11-20 16:50:44.356700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.356904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.356927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:00.017 [2024-11-20 16:50:44.356941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:24:00.017 [2024-11-20 16:50:44.356948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.379802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.379842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:00.017 [2024-11-20 16:50:44.379857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:24:00.017 [2024-11-20 16:50:44.379866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.401657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.401692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:00.017 [2024-11-20 16:50:44.401706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.746 ms 00:24:00.017 [2024-11-20 16:50:44.401716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.402275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.402295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:00.017 [2024-11-20 16:50:44.402306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:24:00.017 [2024-11-20 16:50:44.402313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.474247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.474294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:00.017 [2024-11-20 16:50:44.474313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.894 ms 00:24:00.017 [2024-11-20 16:50:44.474322] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.497709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.497754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:00.017 [2024-11-20 16:50:44.497768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.295 ms 00:24:00.017 [2024-11-20 16:50:44.497776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.520405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.520445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:00.017 [2024-11-20 16:50:44.520457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.591 ms 00:24:00.017 [2024-11-20 16:50:44.520464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.543707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.543749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:00.017 [2024-11-20 16:50:44.543763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:24:00.017 [2024-11-20 16:50:44.543771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.543812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.543821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:00.017 [2024-11-20 16:50:44.543833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:00.017 [2024-11-20 16:50:44.543840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.017 [2024-11-20 16:50:44.543917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.017 [2024-11-20 16:50:44.543927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:00.018 [2024-11-20 16:50:44.543939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:00.018 [2024-11-20 16:50:44.543946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.018 [2024-11-20 16:50:44.544806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2948.558 ms, result 0 00:24:00.018 { 00:24:00.018 "name": "ftl0", 00:24:00.018 "uuid": "0afc11df-403e-4ed8-837d-a5655901f1fc" 00:24:00.018 } 00:24:00.018 16:50:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:00.018 16:50:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:00.018 16:50:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:00.018 16:50:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:00.018 16:50:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:00.277 /dev/nbd0 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:00.277 1+0 records in 00:24:00.277 1+0 records out 00:24:00.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304676 s, 13.4 MB/s 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:00.277 16:50:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:00.277 [2024-11-20 16:50:45.139473] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:24:00.277 [2024-11-20 16:50:45.139591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78653 ] 00:24:00.537 [2024-11-20 16:50:45.299043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.537 [2024-11-20 16:50:45.399214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.911  [2024-11-20T16:50:47.731Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-20T16:50:48.664Z] Copying: 393/1024 [MB] (196 MBps) [2024-11-20T16:50:50.037Z] Copying: 601/1024 [MB] (207 MBps) [2024-11-20T16:50:50.601Z] Copying: 850/1024 [MB] (249 MBps) [2024-11-20T16:50:51.165Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:24:06.279 00:24:06.279 16:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:08.178 16:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:08.435 [2024-11-20 16:50:53.094765] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:24:08.435 [2024-11-20 16:50:53.094885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78741 ] 00:24:08.435 [2024-11-20 16:50:53.258642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.692 [2024-11-20 16:50:53.341967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.062  [2024-11-20T16:50:55.881Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-20T16:50:56.813Z] Copying: 57/1024 [MB] (28 MBps) [2024-11-20T16:50:57.745Z] Copying: 85/1024 [MB] (27 MBps) [2024-11-20T16:50:58.702Z] Copying: 110/1024 [MB] (25 MBps) [2024-11-20T16:50:59.634Z] Copying: 139/1024 [MB] (28 MBps) [2024-11-20T16:51:00.567Z] Copying: 168/1024 [MB] (28 MBps) [2024-11-20T16:51:01.941Z] Copying: 197/1024 [MB] (28 MBps) [2024-11-20T16:51:02.874Z] Copying: 226/1024 [MB] (29 MBps) [2024-11-20T16:51:03.806Z] Copying: 253/1024 [MB] (26 MBps) [2024-11-20T16:51:04.740Z] Copying: 282/1024 [MB] (29 MBps) [2024-11-20T16:51:05.672Z] Copying: 311/1024 [MB] (29 MBps) [2024-11-20T16:51:06.605Z] Copying: 341/1024 [MB] (29 MBps) [2024-11-20T16:51:07.537Z] Copying: 370/1024 [MB] (29 MBps) [2024-11-20T16:51:08.909Z] Copying: 400/1024 [MB] (29 MBps) [2024-11-20T16:51:09.840Z] Copying: 430/1024 [MB] (30 MBps) [2024-11-20T16:51:10.774Z] Copying: 460/1024 [MB] (30 MBps) [2024-11-20T16:51:11.707Z] Copying: 489/1024 [MB] (28 MBps) [2024-11-20T16:51:12.640Z] Copying: 521/1024 [MB] (32 MBps) [2024-11-20T16:51:13.574Z] Copying: 551/1024 [MB] (30 MBps) [2024-11-20T16:51:14.610Z] Copying: 581/1024 [MB] (30 MBps) [2024-11-20T16:51:15.543Z] Copying: 611/1024 [MB] (29 MBps) [2024-11-20T16:51:16.913Z] Copying: 642/1024 [MB] (30 MBps) [2024-11-20T16:51:17.846Z] Copying: 676/1024 [MB] (34 MBps) [2024-11-20T16:51:18.777Z] Copying: 711/1024 [MB] (34 MBps) [2024-11-20T16:51:19.715Z] Copying: 742/1024 [MB] (30 MBps) [2024-11-20T16:51:20.647Z] Copying: 772/1024 [MB] (30 MBps) [2024-11-20T16:51:21.579Z] Copying: 803/1024 [MB] (30 MBps) [2024-11-20T16:51:22.953Z] Copying: 832/1024 [MB] (29 MBps) [2024-11-20T16:51:23.519Z] Copying: 861/1024 [MB] (29 MBps) [2024-11-20T16:51:24.891Z] Copying: 892/1024 [MB] (30 MBps) [2024-11-20T16:51:25.823Z] Copying: 924/1024 [MB] (32 MBps) [2024-11-20T16:51:26.756Z] Copying: 958/1024 [MB] (33 MBps) [2024-11-20T16:51:27.687Z] Copying: 988/1024 [MB] (29 MBps) [2024-11-20T16:51:27.945Z] Copying: 1017/1024 [MB] (28 MBps) [2024-11-20T16:51:28.511Z] Copying: 1024/1024 [MB] (average 29 MBps) 00:24:43.625 00:24:43.625 16:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:43.625 16:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:43.882 16:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:44.143 [2024-11-20 16:51:28.777714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.777770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:44.143 [2024-11-20 16:51:28.777783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.143 [2024-11-20 16:51:28.777793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.777818] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:44.143 [2024-11-20 16:51:28.780405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.780428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:44.143 [2024-11-20 16:51:28.780440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:24:44.143 [2024-11-20 16:51:28.780449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.781986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.782016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:44.143 [2024-11-20 16:51:28.782027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:24:44.143 [2024-11-20 16:51:28.782035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.796864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.796894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:44.143 [2024-11-20 16:51:28.796906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.809 ms 00:24:44.143 [2024-11-20 16:51:28.796914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.803071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.803096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:44.143 [2024-11-20 16:51:28.803108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:24:44.143 [2024-11-20 16:51:28.803117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.826051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.826081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:44.143 [2024-11-20 16:51:28.826092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.868 ms 00:24:44.143 [2024-11-20 16:51:28.826101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.840361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.840400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:44.143 [2024-11-20 16:51:28.840415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.218 ms 00:24:44.143 [2024-11-20 16:51:28.840425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.840572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.840588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:44.143 [2024-11-20 16:51:28.840599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:44.143 [2024-11-20 16:51:28.840606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.862892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.862931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:44.143 [2024-11-20 16:51:28.862943] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.265 ms 00:24:44.143 [2024-11-20 16:51:28.862951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.884695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.884721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:44.143 [2024-11-20 16:51:28.884732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.708 ms 00:24:44.143 [2024-11-20 16:51:28.884740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.906656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.906684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:44.143 [2024-11-20 16:51:28.906695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.876 ms 00:24:44.143 [2024-11-20 16:51:28.906703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.928460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.143 [2024-11-20 16:51:28.928488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:44.143 [2024-11-20 16:51:28.928500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.683 ms 00:24:44.143 [2024-11-20 16:51:28.928507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.143 [2024-11-20 16:51:28.928541] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:44.143 [2024-11-20 16:51:28.928555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928668] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:44.143 [2024-11-20 16:51:28.928871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 
16:51:28.928879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.928993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:24:44.144 [2024-11-20 16:51:28.929090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:44.144 [2024-11-20 16:51:28.929413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:44.144 [2024-11-20 16:51:28.929421] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0afc11df-403e-4ed8-837d-a5655901f1fc 00:24:44.144 [2024-11-20 16:51:28.929429] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:44.144 [2024-11-20 16:51:28.929439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:44.144 [2024-11-20 16:51:28.929447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:44.144 [2024-11-20 16:51:28.929458] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:44.144 [2024-11-20 16:51:28.929464] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:44.144 [2024-11-20 16:51:28.929474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:44.144 [2024-11-20 16:51:28.929481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:44.144 [2024-11-20 16:51:28.929488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:44.144 [2024-11-20 16:51:28.929495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:44.144 [2024-11-20 16:51:28.929503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.144 [2024-11-20 16:51:28.929510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:44.144 [2024-11-20 16:51:28.929519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:24:44.144 [2024-11-20 16:51:28.929527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.144 [2024-11-20 16:51:28.941725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:44.144 [2024-11-20 16:51:28.941752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:44.144 [2024-11-20 16:51:28.941766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.166 ms 00:24:44.144 [2024-11-20 16:51:28.941775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.144 [2024-11-20 16:51:28.942134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.144 [2024-11-20 16:51:28.942147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:44.144 [2024-11-20 16:51:28.942158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:44.144 [2024-11-20 16:51:28.942165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.144 [2024-11-20 16:51:28.983114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.144 [2024-11-20 16:51:28.983150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.144 [2024-11-20 16:51:28.983163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.144 [2024-11-20 16:51:28.983171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.144 [2024-11-20 16:51:28.983232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.144 [2024-11-20 16:51:28.983241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.144 [2024-11-20 16:51:28.983250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.144 [2024-11-20 16:51:28.983257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.145 [2024-11-20 16:51:28.983339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.145 [2024-11-20 16:51:28.983349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.145 [2024-11-20 16:51:28.983360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.145 [2024-11-20 16:51:28.983367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.145 [2024-11-20 16:51:28.983398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.145 [2024-11-20 16:51:28.983407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.145 [2024-11-20 16:51:28.983416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.145 [2024-11-20 16:51:28.983422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.058157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.058198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.403 [2024-11-20 16:51:29.058210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.058217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.119599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.119640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:44.403 [2024-11-20 16:51:29.119654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.119662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 
16:51:29.119732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.119741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.403 [2024-11-20 16:51:29.119751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.119761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.119821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.119830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.403 [2024-11-20 16:51:29.119839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.119846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.119933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.119942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.403 [2024-11-20 16:51:29.119951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.119958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.119995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.120004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:44.403 [2024-11-20 16:51:29.120013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.120020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.120054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.120063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.403 [2024-11-20 16:51:29.120071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.120079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.120123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.403 [2024-11-20 16:51:29.120133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.403 [2024-11-20 16:51:29.120142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.403 [2024-11-20 16:51:29.120149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.403 [2024-11-20 16:51:29.120274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.526 ms, result 0 00:24:44.403 true 00:24:44.403 16:51:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78522 00:24:44.403 16:51:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78522 00:24:44.403 16:51:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:44.403 [2024-11-20 16:51:29.210741] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:24:44.403 [2024-11-20 16:51:29.210871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79115 ] 00:24:44.660 [2024-11-20 16:51:29.373249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.660 [2024-11-20 16:51:29.472717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.032  [2024-11-20T16:51:31.852Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-20T16:51:32.785Z] Copying: 392/1024 [MB] (196 MBps) [2024-11-20T16:51:33.719Z] Copying: 597/1024 [MB] (204 MBps) [2024-11-20T16:51:34.671Z] Copying: 849/1024 [MB] (252 MBps) [2024-11-20T16:51:35.236Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:24:50.350 00:24:50.350 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78522 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:50.350 16:51:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:50.350 [2024-11-20 16:51:35.034639] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:24:50.350 [2024-11-20 16:51:35.034759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79179 ] 00:24:50.350 [2024-11-20 16:51:35.193170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.609 [2024-11-20 16:51:35.297209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.866 [2024-11-20 16:51:35.558065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.866 [2024-11-20 16:51:35.558123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.866 [2024-11-20 16:51:35.622023] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:50.866 [2024-11-20 16:51:35.622369] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:50.866 [2024-11-20 16:51:35.622510] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:51.126 [2024-11-20 16:51:35.801863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.801914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.126 [2024-11-20 16:51:35.801927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.126 [2024-11-20 16:51:35.801935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.801986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.801996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.126 [2024-11-20 16:51:35.802005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:51.126 [2024-11-20 16:51:35.802012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.802030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.126 
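(Annotation) The recovery above is driven by the --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json argument passed to spdk_dd on dirty_shutdown.sh line 88: with the original spdk_tgt killed with -9, spdk_dd loads the bdev configuration itself, reattaches the FTL instance in its dirty state, and the blobstore recovery plus the metadata/L2P restore steps that follow are the result. The contents of that ftl.json are not printed in this log; the snippet below is only a minimal sketch of the usual shape of such a config, assuming the standard bdev_ftl_create method and a placeholder base bdev name (only the nvc0n1p0 cache bdev and the device UUID actually appear in the log above).

    # Hypothetical example only; the test generates its own ftl.json elsewhere.
    cat > /tmp/ftl.example.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_ftl_create",
              "params": {
                "name": "ftl0",
                "base_bdev": "basedev0",
                "cache": "nvc0n1p0",
                "uuid": "0afc11df-403e-4ed8-837d-a5655901f1fc"
              }
            }
          ]
        }
      ]
    }
    EOF

(base_bdev "basedev0" is a placeholder; the log only reports "Open base bdev" without naming the device.)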
[2024-11-20 16:51:35.802731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.126 [2024-11-20 16:51:35.802748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.802756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.126 [2024-11-20 16:51:35.802765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:24:51.126 [2024-11-20 16:51:35.802772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.803881] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:51.126 [2024-11-20 16:51:35.816915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.816958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:51.126 [2024-11-20 16:51:35.816970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.034 ms 00:24:51.126 [2024-11-20 16:51:35.816979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.817036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.817045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:51.126 [2024-11-20 16:51:35.817054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:51.126 [2024-11-20 16:51:35.817061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.821930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.821958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.126 [2024-11-20 16:51:35.821968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.798 ms 00:24:51.126 [2024-11-20 16:51:35.821975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.822040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.822052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.126 [2024-11-20 16:51:35.822060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:51.126 [2024-11-20 16:51:35.822067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.822108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.822120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.126 [2024-11-20 16:51:35.822128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:51.126 [2024-11-20 16:51:35.822134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.822155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.126 [2024-11-20 16:51:35.825505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.825529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.126 [2024-11-20 16:51:35.825538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.355 ms 00:24:51.126 [2024-11-20 16:51:35.825545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:51.126 [2024-11-20 16:51:35.825573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.825581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.126 [2024-11-20 16:51:35.825589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:51.126 [2024-11-20 16:51:35.825596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.825615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:51.126 [2024-11-20 16:51:35.825635] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:51.126 [2024-11-20 16:51:35.825668] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:51.126 [2024-11-20 16:51:35.825688] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:51.126 [2024-11-20 16:51:35.825789] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.126 [2024-11-20 16:51:35.825800] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.126 [2024-11-20 16:51:35.825811] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.126 [2024-11-20 16:51:35.825820] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.126 [2024-11-20 16:51:35.825831] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.126 [2024-11-20 16:51:35.825839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.126 [2024-11-20 16:51:35.825847] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.126 [2024-11-20 16:51:35.825854] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.126 [2024-11-20 16:51:35.825861] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.126 [2024-11-20 16:51:35.825868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.825875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.126 [2024-11-20 16:51:35.825883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:24:51.126 [2024-11-20 16:51:35.825889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.825971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.126 [2024-11-20 16:51:35.825981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.126 [2024-11-20 16:51:35.825988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:51.126 [2024-11-20 16:51:35.825995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.126 [2024-11-20 16:51:35.826109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.126 [2024-11-20 16:51:35.826120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.126 [2024-11-20 16:51:35.826128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.126 [2024-11-20 16:51:35.826135] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.126 [2024-11-20 16:51:35.826144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.126 [2024-11-20 16:51:35.826150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.126 [2024-11-20 16:51:35.826157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.126 [2024-11-20 16:51:35.826165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.126 [2024-11-20 16:51:35.826171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.126 [2024-11-20 16:51:35.826191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.126 [2024-11-20 16:51:35.826200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.126 [2024-11-20 16:51:35.826212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.126 [2024-11-20 16:51:35.826218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.126 [2024-11-20 16:51:35.826225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.126 [2024-11-20 16:51:35.826232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.126 [2024-11-20 16:51:35.826238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.126 [2024-11-20 16:51:35.826245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.126 [2024-11-20 16:51:35.826252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.126 [2024-11-20 16:51:35.826259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.126 [2024-11-20 16:51:35.826266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.126 [2024-11-20 16:51:35.826273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.126 [2024-11-20 16:51:35.826280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.126 [2024-11-20 16:51:35.826286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.127 [2024-11-20 16:51:35.826293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.127 [2024-11-20 16:51:35.826299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.127 [2024-11-20 16:51:35.826305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.127 [2024-11-20 16:51:35.826312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.127 [2024-11-20 16:51:35.826319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.127 [2024-11-20 16:51:35.826326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.127 [2024-11-20 16:51:35.826332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.127 [2024-11-20 16:51:35.826338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.127 [2024-11-20 16:51:35.826345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.127 [2024-11-20 16:51:35.826352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.127 [2024-11-20 16:51:35.826358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.127 [2024-11-20 16:51:35.826364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.127 
[2024-11-20 16:51:35.826370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.127 [2024-11-20 16:51:35.826391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.127 [2024-11-20 16:51:35.826398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.127 [2024-11-20 16:51:35.826405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.127 [2024-11-20 16:51:35.826412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.127 [2024-11-20 16:51:35.826418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.127 [2024-11-20 16:51:35.826425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.127 [2024-11-20 16:51:35.826432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.127 [2024-11-20 16:51:35.826439] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.127 [2024-11-20 16:51:35.826447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.127 [2024-11-20 16:51:35.826454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.127 [2024-11-20 16:51:35.826463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.127 [2024-11-20 16:51:35.826471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.127 [2024-11-20 16:51:35.826478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.127 [2024-11-20 16:51:35.826485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.127 [2024-11-20 16:51:35.826493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.127 [2024-11-20 16:51:35.826500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.127 [2024-11-20 16:51:35.826507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.127 [2024-11-20 16:51:35.826515] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.127 [2024-11-20 16:51:35.826524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.127 [2024-11-20 16:51:35.826532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.127 [2024-11-20 16:51:35.826539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.127 [2024-11-20 16:51:35.826546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.127 [2024-11-20 16:51:35.826553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.127 [2024-11-20 16:51:35.826560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.127 [2024-11-20 16:51:35.826566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.127 [2024-11-20 16:51:35.826573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:24:51.127 [2024-11-20 16:51:35.826580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.127 [2024-11-20 16:51:35.826587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.127 [2024-11-20 16:51:35.826593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.127 [2024-11-20 16:51:35.826600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.127 [2024-11-20 16:51:35.826607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.127 [2024-11-20 16:51:35.826613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.127 [2024-11-20 16:51:35.826620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.127 [2024-11-20 16:51:35.826627] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.127 [2024-11-20 16:51:35.826636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.127 [2024-11-20 16:51:35.826645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.127 [2024-11-20 16:51:35.826652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.127 [2024-11-20 16:51:35.826659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.127 [2024-11-20 16:51:35.826666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.127 [2024-11-20 16:51:35.826673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.127 [2024-11-20 16:51:35.826680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.127 [2024-11-20 16:51:35.826687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:24:51.127 [2024-11-20 16:51:35.826694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.127 [2024-11-20 16:51:35.856069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.127 [2024-11-20 16:51:35.856111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.127 [2024-11-20 16:51:35.856123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.331 ms 00:24:51.127 [2024-11-20 16:51:35.856131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.127 [2024-11-20 16:51:35.856221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.127 [2024-11-20 16:51:35.856233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.127 [2024-11-20 16:51:35.856241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:51.127 [2024-11-20 
16:51:35.856248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.127 [2024-11-20 16:51:35.898886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.127 [2024-11-20 16:51:35.898933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.127 [2024-11-20 16:51:35.898946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.578 ms 00:24:51.127 [2024-11-20 16:51:35.898957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.127 [2024-11-20 16:51:35.899008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.127 [2024-11-20 16:51:35.899017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.127 [2024-11-20 16:51:35.899026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.127 [2024-11-20 16:51:35.899033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.127 [2024-11-20 16:51:35.899411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.127 [2024-11-20 16:51:35.899427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.127 [2024-11-20 16:51:35.899437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:24:51.127 [2024-11-20 16:51:35.899444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.127 [2024-11-20 16:51:35.899574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.128 [2024-11-20 16:51:35.899583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.128 [2024-11-20 16:51:35.899591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:51.128 [2024-11-20 16:51:35.899598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.128 [2024-11-20 16:51:35.912500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.128 [2024-11-20 16:51:35.912529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.128 [2024-11-20 16:51:35.912541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.883 ms 00:24:51.128 [2024-11-20 16:51:35.912549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.128 [2024-11-20 16:51:35.925166] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:51.128 [2024-11-20 16:51:35.925199] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.128 [2024-11-20 16:51:35.925213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.128 [2024-11-20 16:51:35.925222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.128 [2024-11-20 16:51:35.925232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.560 ms 00:24:51.128 [2024-11-20 16:51:35.925239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.128 [2024-11-20 16:51:35.950733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.128 [2024-11-20 16:51:35.950776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.128 [2024-11-20 16:51:35.950797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.445 ms 00:24:51.128 [2024-11-20 16:51:35.950805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:51.128 [2024-11-20 16:51:35.962278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.128 [2024-11-20 16:51:35.962310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.128 [2024-11-20 16:51:35.962321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.417 ms 00:24:51.128 [2024-11-20 16:51:35.962329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.128 [2024-11-20 16:51:35.973329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.128 [2024-11-20 16:51:35.973355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:51.128 [2024-11-20 16:51:35.973365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.966 ms 00:24:51.128 [2024-11-20 16:51:35.973372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.128 [2024-11-20 16:51:35.973983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.128 [2024-11-20 16:51:35.974006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.128 [2024-11-20 16:51:35.974015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:24:51.128 [2024-11-20 16:51:35.974022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.027922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.027974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:51.386 [2024-11-20 16:51:36.027987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.883 ms 00:24:51.386 [2024-11-20 16:51:36.027995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.038417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.386 [2024-11-20 16:51:36.040910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.040932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.386 [2024-11-20 16:51:36.040944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.869 ms 00:24:51.386 [2024-11-20 16:51:36.040952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.041045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.041055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:51.386 [2024-11-20 16:51:36.041063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:51.386 [2024-11-20 16:51:36.041071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.041133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.041143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.386 [2024-11-20 16:51:36.041150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:51.386 [2024-11-20 16:51:36.041158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.041176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.041186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:24:51.386 [2024-11-20 16:51:36.041194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:51.386 [2024-11-20 16:51:36.041201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.041230] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:51.386 [2024-11-20 16:51:36.041239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.041246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:51.386 [2024-11-20 16:51:36.041253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:51.386 [2024-11-20 16:51:36.041260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.064337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.064375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.386 [2024-11-20 16:51:36.064395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.056 ms 00:24:51.386 [2024-11-20 16:51:36.064403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.064478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.386 [2024-11-20 16:51:36.064487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.386 [2024-11-20 16:51:36.064496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:51.386 [2024-11-20 16:51:36.064503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.386 [2024-11-20 16:51:36.065416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.123 ms, result 0 00:24:52.319  [2024-11-20T16:51:38.148Z] Copying: 43/1024 [MB] (43 MBps) [2024-11-20T16:51:39.091Z] Copying: 70/1024 [MB] (27 MBps) [2024-11-20T16:51:40.473Z] Copying: 92/1024 [MB] (21 MBps) [2024-11-20T16:51:41.408Z] Copying: 118/1024 [MB] (25 MBps) [2024-11-20T16:51:42.343Z] Copying: 154/1024 [MB] (36 MBps) [2024-11-20T16:51:43.277Z] Copying: 198/1024 [MB] (43 MBps) [2024-11-20T16:51:44.221Z] Copying: 241/1024 [MB] (42 MBps) [2024-11-20T16:51:45.161Z] Copying: 270/1024 [MB] (29 MBps) [2024-11-20T16:51:46.102Z] Copying: 296/1024 [MB] (26 MBps) [2024-11-20T16:51:47.475Z] Copying: 335/1024 [MB] (38 MBps) [2024-11-20T16:51:48.409Z] Copying: 377/1024 [MB] (41 MBps) [2024-11-20T16:51:49.384Z] Copying: 422/1024 [MB] (44 MBps) [2024-11-20T16:51:50.317Z] Copying: 469/1024 [MB] (46 MBps) [2024-11-20T16:51:51.249Z] Copying: 515/1024 [MB] (46 MBps) [2024-11-20T16:51:52.181Z] Copying: 561/1024 [MB] (45 MBps) [2024-11-20T16:51:53.111Z] Copying: 605/1024 [MB] (43 MBps) [2024-11-20T16:51:54.117Z] Copying: 649/1024 [MB] (44 MBps) [2024-11-20T16:51:55.489Z] Copying: 693/1024 [MB] (43 MBps) [2024-11-20T16:51:56.423Z] Copying: 737/1024 [MB] (44 MBps) [2024-11-20T16:51:57.356Z] Copying: 781/1024 [MB] (44 MBps) [2024-11-20T16:51:58.288Z] Copying: 827/1024 [MB] (45 MBps) [2024-11-20T16:51:59.221Z] Copying: 871/1024 [MB] (44 MBps) [2024-11-20T16:52:00.152Z] Copying: 913/1024 [MB] (41 MBps) [2024-11-20T16:52:01.082Z] Copying: 943/1024 [MB] (30 MBps) [2024-11-20T16:52:02.451Z] Copying: 983/1024 [MB] (39 MBps) [2024-11-20T16:52:03.425Z] Copying: 1023/1024 [MB] (39 MBps) [2024-11-20T16:52:03.425Z] Copying: 1048528/1048576 [kB] (904 kBps) 
[2024-11-20T16:52:03.425Z] Copying: 1024/1024 [MB] (average 37 MBps)[2024-11-20 16:52:03.144531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.539 [2024-11-20 16:52:03.144578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:18.539 [2024-11-20 16:52:03.144593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:18.539 [2024-11-20 16:52:03.144601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.147752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:18.540 [2024-11-20 16:52:03.153143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.153170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:18.540 [2024-11-20 16:52:03.153180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.357 ms 00:25:18.540 [2024-11-20 16:52:03.153189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.163365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.163400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:18.540 [2024-11-20 16:52:03.163410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.230 ms 00:25:18.540 [2024-11-20 16:52:03.163418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.180276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.180302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:18.540 [2024-11-20 16:52:03.180312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.843 ms 00:25:18.540 [2024-11-20 16:52:03.180319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.186454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.186489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:18.540 [2024-11-20 16:52:03.186498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.111 ms 00:25:18.540 [2024-11-20 16:52:03.186506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.209451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.209479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:18.540 [2024-11-20 16:52:03.209490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:25:18.540 [2024-11-20 16:52:03.209498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.223246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.223272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:18.540 [2024-11-20 16:52:03.223283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.717 ms 00:25:18.540 [2024-11-20 16:52:03.223292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.280375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.280411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:25:18.540 [2024-11-20 16:52:03.280421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.049 ms 00:25:18.540 [2024-11-20 16:52:03.280433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.303091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.303118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:18.540 [2024-11-20 16:52:03.303127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.644 ms 00:25:18.540 [2024-11-20 16:52:03.303134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.325444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.325470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:18.540 [2024-11-20 16:52:03.325480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.272 ms 00:25:18.540 [2024-11-20 16:52:03.325488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.347370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.347401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:18.540 [2024-11-20 16:52:03.347412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.852 ms 00:25:18.540 [2024-11-20 16:52:03.347419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.368957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.540 [2024-11-20 16:52:03.368981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:18.540 [2024-11-20 16:52:03.368991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.487 ms 00:25:18.540 [2024-11-20 16:52:03.368999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.540 [2024-11-20 16:52:03.369027] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:18.540 [2024-11-20 16:52:03.369040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:25:18.540 [2024-11-20 16:52:03.369050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369110] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:18.540 [2024-11-20 16:52:03.369298] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 
16:52:03.369600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:25:18.541 [2024-11-20 16:52:03.369782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:18.541 [2024-11-20 16:52:03.369915] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:18.541 [2024-11-20 16:52:03.369922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0afc11df-403e-4ed8-837d-a5655901f1fc 00:25:18.541 [2024-11-20 16:52:03.369930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:25:18.541 [2024-11-20 16:52:03.369940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:25:18.541 [2024-11-20 16:52:03.369952] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:25:18.541 [2024-11-20 16:52:03.369960] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:25:18.541 [2024-11-20 16:52:03.369967] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:18.541 [2024-11-20 16:52:03.369974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:18.541 [2024-11-20 16:52:03.369981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:18.541 [2024-11-20 16:52:03.369987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:18.541 [2024-11-20 16:52:03.369994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:25:18.542 [2024-11-20 16:52:03.370000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.542 [2024-11-20 16:52:03.370008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:18.542 [2024-11-20 16:52:03.370016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:25:18.542 [2024-11-20 16:52:03.370022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.542 [2024-11-20 16:52:03.382031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.542 [2024-11-20 16:52:03.382053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:18.542 [2024-11-20 16:52:03.382063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.994 ms 00:25:18.542 [2024-11-20 16:52:03.382071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.542 [2024-11-20 16:52:03.382420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.542 [2024-11-20 16:52:03.382429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:18.542 [2024-11-20 16:52:03.382437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:25:18.542 [2024-11-20 16:52:03.382444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.542 [2024-11-20 16:52:03.414565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.542 [2024-11-20 16:52:03.414593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:18.542 [2024-11-20 16:52:03.414603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.542 [2024-11-20 16:52:03.414611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.542 [2024-11-20 16:52:03.414663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.542 [2024-11-20 16:52:03.414672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:18.542 [2024-11-20 16:52:03.414680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.542 [2024-11-20 16:52:03.414687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.542 [2024-11-20 16:52:03.414735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.542 [2024-11-20 16:52:03.414744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:18.542 [2024-11-20 16:52:03.414751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.542 [2024-11-20 16:52:03.414758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.542 [2024-11-20 16:52:03.414773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.542 [2024-11-20 16:52:03.414780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:18.542 [2024-11-20 16:52:03.414787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.542 [2024-11-20 16:52:03.414794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.799 [2024-11-20 16:52:03.489895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.799 [2024-11-20 16:52:03.489930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:18.799 [2024-11-20 16:52:03.489940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.799 
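(Annotation) The shutdown statistics dumped above are internally consistent: 129536 user writes against 130496 total media writes give the reported write amplification factor of 1.0074, whereas the first shutdown earlier in the run still had 0 user writes, which is why its WAF was printed as inf. A quick check of the arithmetic, assuming bc is available on the test host:

    $ echo "scale=4; 130496 / 129536" | bc
    1.0074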
[2024-11-20 16:52:03.489947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.799 [2024-11-20 16:52:03.551017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.799 [2024-11-20 16:52:03.551053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:18.800 [2024-11-20 16:52:03.551063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.800 [2024-11-20 16:52:03.551071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.800 [2024-11-20 16:52:03.551133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.800 [2024-11-20 16:52:03.551142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:18.800 [2024-11-20 16:52:03.551150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.800 [2024-11-20 16:52:03.551157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.800 [2024-11-20 16:52:03.551195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.800 [2024-11-20 16:52:03.551203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:18.800 [2024-11-20 16:52:03.551211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.800 [2024-11-20 16:52:03.551218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.800 [2024-11-20 16:52:03.551300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.800 [2024-11-20 16:52:03.551312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:18.800 [2024-11-20 16:52:03.551320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.800 [2024-11-20 16:52:03.551327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.800 [2024-11-20 16:52:03.551352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.800 [2024-11-20 16:52:03.551361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:18.800 [2024-11-20 16:52:03.551368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.800 [2024-11-20 16:52:03.551375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.800 [2024-11-20 16:52:03.551425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.800 [2024-11-20 16:52:03.551435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:18.800 [2024-11-20 16:52:03.551443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.800 [2024-11-20 16:52:03.551450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.800 [2024-11-20 16:52:03.551487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.800 [2024-11-20 16:52:03.551496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:18.800 [2024-11-20 16:52:03.551503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.800 [2024-11-20 16:52:03.551510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.800 [2024-11-20 16:52:03.551617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 409.019 ms, result 0 00:25:20.174 00:25:20.174 00:25:20.174 16:52:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:22.703 16:52:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:22.703 [2024-11-20 16:52:07.261895] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:25:22.703 [2024-11-20 16:52:07.262031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79509 ] 00:25:22.703 [2024-11-20 16:52:07.423077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.703 [2024-11-20 16:52:07.522904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.961 [2024-11-20 16:52:07.773177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:22.961 [2024-11-20 16:52:07.773240] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.221 [2024-11-20 16:52:07.927069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.927116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:23.221 [2024-11-20 16:52:07.927133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:23.221 [2024-11-20 16:52:07.927141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.927187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.927199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.221 [2024-11-20 16:52:07.927209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:23.221 [2024-11-20 16:52:07.927216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.927234] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:23.221 [2024-11-20 16:52:07.927959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:23.221 [2024-11-20 16:52:07.927982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.927990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.221 [2024-11-20 16:52:07.927999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:25:23.221 [2024-11-20 16:52:07.928006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.929050] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:23.221 [2024-11-20 16:52:07.941002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.941039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:23.221 [2024-11-20 16:52:07.941050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.954 ms 00:25:23.221 [2024-11-20 16:52:07.941059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.941112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 
16:52:07.941121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:23.221 [2024-11-20 16:52:07.941129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:23.221 [2024-11-20 16:52:07.941136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.945897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.945933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.221 [2024-11-20 16:52:07.945947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:25:23.221 [2024-11-20 16:52:07.945954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.946031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.946040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.221 [2024-11-20 16:52:07.946048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:23.221 [2024-11-20 16:52:07.946055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.946099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.946108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:23.221 [2024-11-20 16:52:07.946115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:23.221 [2024-11-20 16:52:07.946122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.946142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:23.221 [2024-11-20 16:52:07.949352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.949387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.221 [2024-11-20 16:52:07.949396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:25:23.221 [2024-11-20 16:52:07.949406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.949433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.949441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:23.221 [2024-11-20 16:52:07.949449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:23.221 [2024-11-20 16:52:07.949456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.949475] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:23.221 [2024-11-20 16:52:07.949493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:23.221 [2024-11-20 16:52:07.949527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:23.221 [2024-11-20 16:52:07.949543] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:23.221 [2024-11-20 16:52:07.949645] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:23.221 [2024-11-20 16:52:07.949656] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:23.221 [2024-11-20 16:52:07.949666] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:23.221 [2024-11-20 16:52:07.949676] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:23.221 [2024-11-20 16:52:07.949684] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:23.221 [2024-11-20 16:52:07.949692] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:23.221 [2024-11-20 16:52:07.949699] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:23.221 [2024-11-20 16:52:07.949705] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:23.221 [2024-11-20 16:52:07.949712] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:23.221 [2024-11-20 16:52:07.949721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.949729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:23.221 [2024-11-20 16:52:07.949736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:25:23.221 [2024-11-20 16:52:07.949743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.949824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.221 [2024-11-20 16:52:07.949832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:23.221 [2024-11-20 16:52:07.949839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:23.221 [2024-11-20 16:52:07.949846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.221 [2024-11-20 16:52:07.949945] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:23.222 [2024-11-20 16:52:07.949956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:23.222 [2024-11-20 16:52:07.949964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.222 [2024-11-20 16:52:07.949971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.222 [2024-11-20 16:52:07.949979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:23.222 [2024-11-20 16:52:07.949985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:23.222 [2024-11-20 16:52:07.949994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:23.222 [2024-11-20 16:52:07.950001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:23.222 [2024-11-20 16:52:07.950008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.222 [2024-11-20 16:52:07.950021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:23.222 [2024-11-20 16:52:07.950027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:23.222 [2024-11-20 16:52:07.950033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.222 [2024-11-20 16:52:07.950040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:23.222 [2024-11-20 16:52:07.950047] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:25:23.222 [2024-11-20 16:52:07.950058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:23.222 [2024-11-20 16:52:07.950071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:23.222 [2024-11-20 16:52:07.950077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:23.222 [2024-11-20 16:52:07.950090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.222 [2024-11-20 16:52:07.950103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:23.222 [2024-11-20 16:52:07.950109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.222 [2024-11-20 16:52:07.950122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:23.222 [2024-11-20 16:52:07.950128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.222 [2024-11-20 16:52:07.950141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:23.222 [2024-11-20 16:52:07.950147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.222 [2024-11-20 16:52:07.950159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:23.222 [2024-11-20 16:52:07.950166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.222 [2024-11-20 16:52:07.950179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:23.222 [2024-11-20 16:52:07.950185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:23.222 [2024-11-20 16:52:07.950191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.222 [2024-11-20 16:52:07.950198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:23.222 [2024-11-20 16:52:07.950205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:23.222 [2024-11-20 16:52:07.950211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:23.222 [2024-11-20 16:52:07.950224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:23.222 [2024-11-20 16:52:07.950230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950236] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:23.222 [2024-11-20 16:52:07.950244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:23.222 [2024-11-20 16:52:07.950250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.222 [2024-11-20 
16:52:07.950257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.222 [2024-11-20 16:52:07.950264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:23.222 [2024-11-20 16:52:07.950270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:23.222 [2024-11-20 16:52:07.950277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:23.222 [2024-11-20 16:52:07.950283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:23.222 [2024-11-20 16:52:07.950289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:23.222 [2024-11-20 16:52:07.950296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:23.222 [2024-11-20 16:52:07.950304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:23.222 [2024-11-20 16:52:07.950313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.222 [2024-11-20 16:52:07.950321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:23.222 [2024-11-20 16:52:07.950328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:23.222 [2024-11-20 16:52:07.950335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:23.222 [2024-11-20 16:52:07.950342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:23.222 [2024-11-20 16:52:07.950349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:23.222 [2024-11-20 16:52:07.950355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:23.222 [2024-11-20 16:52:07.950362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:23.222 [2024-11-20 16:52:07.950369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:23.222 [2024-11-20 16:52:07.950387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:23.222 [2024-11-20 16:52:07.950395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:23.222 [2024-11-20 16:52:07.950402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:23.222 [2024-11-20 16:52:07.950409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:23.222 [2024-11-20 16:52:07.950416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:23.222 [2024-11-20 16:52:07.950423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:23.222 [2024-11-20 
16:52:07.950430] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:23.222 [2024-11-20 16:52:07.950442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.222 [2024-11-20 16:52:07.950451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.222 [2024-11-20 16:52:07.950458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:23.223 [2024-11-20 16:52:07.950465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:23.223 [2024-11-20 16:52:07.950472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:23.223 [2024-11-20 16:52:07.950480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:07.950487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:23.223 [2024-11-20 16:52:07.950494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:25:23.223 [2024-11-20 16:52:07.950501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:07.975988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:07.976022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.223 [2024-11-20 16:52:07.976033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.436 ms 00:25:23.223 [2024-11-20 16:52:07.976040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:07.976126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:07.976133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:23.223 [2024-11-20 16:52:07.976141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:23.223 [2024-11-20 16:52:07.976149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.024900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.024940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.223 [2024-11-20 16:52:08.024952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.703 ms 00:25:23.223 [2024-11-20 16:52:08.024960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.025000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.025011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.223 [2024-11-20 16:52:08.025019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:23.223 [2024-11-20 16:52:08.025030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.025397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.025420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.223 [2024-11-20 16:52:08.025429] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:25:23.223 [2024-11-20 16:52:08.025437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.025554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.025567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.223 [2024-11-20 16:52:08.025576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:23.223 [2024-11-20 16:52:08.025588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.038474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.038504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.223 [2024-11-20 16:52:08.038516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.866 ms 00:25:23.223 [2024-11-20 16:52:08.038523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.050696] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:23.223 [2024-11-20 16:52:08.050731] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:23.223 [2024-11-20 16:52:08.050743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.050752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:23.223 [2024-11-20 16:52:08.050762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.134 ms 00:25:23.223 [2024-11-20 16:52:08.050769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.074975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.075025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:23.223 [2024-11-20 16:52:08.075036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.169 ms 00:25:23.223 [2024-11-20 16:52:08.075045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.086434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.086471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:23.223 [2024-11-20 16:52:08.086480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.353 ms 00:25:23.223 [2024-11-20 16:52:08.086487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.097460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.097490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:23.223 [2024-11-20 16:52:08.097500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.942 ms 00:25:23.223 [2024-11-20 16:52:08.097507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.223 [2024-11-20 16:52:08.098101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.223 [2024-11-20 16:52:08.098121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:23.223 [2024-11-20 16:52:08.098130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.515 ms 00:25:23.223 [2024-11-20 16:52:08.098140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.151827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.151874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:23.482 [2024-11-20 16:52:08.151892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.669 ms 00:25:23.482 [2024-11-20 16:52:08.151899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.162018] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:23.482 [2024-11-20 16:52:08.164187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.164215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:23.482 [2024-11-20 16:52:08.164226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.247 ms 00:25:23.482 [2024-11-20 16:52:08.164236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.164321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.164332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:23.482 [2024-11-20 16:52:08.164341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:23.482 [2024-11-20 16:52:08.164352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.165757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.165789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:23.482 [2024-11-20 16:52:08.165799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:25:23.482 [2024-11-20 16:52:08.165806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.165830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.165839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:23.482 [2024-11-20 16:52:08.165846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:23.482 [2024-11-20 16:52:08.165854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.165887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:23.482 [2024-11-20 16:52:08.165898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.165906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:23.482 [2024-11-20 16:52:08.165913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:23.482 [2024-11-20 16:52:08.165920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.188833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.188869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:23.482 [2024-11-20 16:52:08.188880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.897 ms 00:25:23.482 [2024-11-20 16:52:08.188892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:23.482 [2024-11-20 16:52:08.188961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.482 [2024-11-20 16:52:08.188970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:23.482 [2024-11-20 16:52:08.188978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:23.482 [2024-11-20 16:52:08.188985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.482 [2024-11-20 16:52:08.189918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.447 ms, result 0 00:25:24.860  [2024-11-20T16:52:10.679Z] Copying: 932/1048576 [kB] (932 kBps) [2024-11-20T16:52:11.613Z] Copying: 5064/1048576 [kB] (4132 kBps) [2024-11-20T16:52:12.546Z] Copying: 49/1024 [MB] (44 MBps) [2024-11-20T16:52:13.479Z] Copying: 102/1024 [MB] (53 MBps) [2024-11-20T16:52:14.411Z] Copying: 153/1024 [MB] (50 MBps) [2024-11-20T16:52:15.784Z] Copying: 206/1024 [MB] (53 MBps) [2024-11-20T16:52:16.716Z] Copying: 259/1024 [MB] (52 MBps) [2024-11-20T16:52:17.649Z] Copying: 310/1024 [MB] (51 MBps) [2024-11-20T16:52:18.582Z] Copying: 364/1024 [MB] (53 MBps) [2024-11-20T16:52:19.515Z] Copying: 423/1024 [MB] (59 MBps) [2024-11-20T16:52:20.448Z] Copying: 477/1024 [MB] (54 MBps) [2024-11-20T16:52:21.382Z] Copying: 530/1024 [MB] (53 MBps) [2024-11-20T16:52:22.757Z] Copying: 582/1024 [MB] (52 MBps) [2024-11-20T16:52:23.688Z] Copying: 635/1024 [MB] (52 MBps) [2024-11-20T16:52:24.618Z] Copying: 684/1024 [MB] (49 MBps) [2024-11-20T16:52:25.548Z] Copying: 738/1024 [MB] (53 MBps) [2024-11-20T16:52:26.480Z] Copying: 792/1024 [MB] (54 MBps) [2024-11-20T16:52:27.413Z] Copying: 847/1024 [MB] (55 MBps) [2024-11-20T16:52:28.783Z] Copying: 899/1024 [MB] (51 MBps) [2024-11-20T16:52:29.744Z] Copying: 952/1024 [MB] (53 MBps) [2024-11-20T16:52:30.004Z] Copying: 1002/1024 [MB] (49 MBps) [2024-11-20T16:52:30.578Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-20 16:52:30.297659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.297721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:45.692 [2024-11-20 16:52:30.297739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:45.692 [2024-11-20 16:52:30.297748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.297769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:45.692 [2024-11-20 16:52:30.300882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.300915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:45.692 [2024-11-20 16:52:30.300925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:25:45.692 [2024-11-20 16:52:30.300933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.301152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.301168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:45.692 [2024-11-20 16:52:30.301180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:45.692 [2024-11-20 16:52:30.301188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.311056] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.311088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:45.692 [2024-11-20 16:52:30.311099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.853 ms 00:25:45.692 [2024-11-20 16:52:30.311106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.317633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.317662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:45.692 [2024-11-20 16:52:30.317672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.502 ms 00:25:45.692 [2024-11-20 16:52:30.317685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.342154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.342201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:45.692 [2024-11-20 16:52:30.342213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.431 ms 00:25:45.692 [2024-11-20 16:52:30.342220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.355626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.355665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:45.692 [2024-11-20 16:52:30.355678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.367 ms 00:25:45.692 [2024-11-20 16:52:30.355687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.359784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.359814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:45.692 [2024-11-20 16:52:30.359823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:25:45.692 [2024-11-20 16:52:30.359832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.383350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.383392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:45.692 [2024-11-20 16:52:30.383403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.500 ms 00:25:45.692 [2024-11-20 16:52:30.383410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.406590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.406621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:45.692 [2024-11-20 16:52:30.406639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.148 ms 00:25:45.692 [2024-11-20 16:52:30.406646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.429519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.429569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:45.692 [2024-11-20 16:52:30.429578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.843 ms 00:25:45.692 [2024-11-20 16:52:30.429586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 
[2024-11-20 16:52:30.451810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.692 [2024-11-20 16:52:30.451841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:45.692 [2024-11-20 16:52:30.451850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.173 ms 00:25:45.692 [2024-11-20 16:52:30.451857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.692 [2024-11-20 16:52:30.451885] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:45.692 [2024-11-20 16:52:30.451898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:45.692 [2024-11-20 16:52:30.451908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:45.692 [2024-11-20 16:52:30.451917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.451993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.452000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.452007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.452015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.452022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.452029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:45.692 [2024-11-20 16:52:30.452037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: 
free 00:25:45.693 [2024-11-20 16:52:30.452059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 
261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452619] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:45.693 [2024-11-20 16:52:30.452663] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:45.693 [2024-11-20 16:52:30.452670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0afc11df-403e-4ed8-837d-a5655901f1fc 00:25:45.693 [2024-11-20 16:52:30.452677] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:45.693 [2024-11-20 16:52:30.452684] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:25:45.693 [2024-11-20 16:52:30.452691] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:25:45.693 [2024-11-20 16:52:30.452703] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:25:45.693 [2024-11-20 16:52:30.452710] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:45.694 [2024-11-20 16:52:30.452717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:45.694 [2024-11-20 16:52:30.452725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:45.694 [2024-11-20 16:52:30.452736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:45.694 [2024-11-20 16:52:30.452743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:45.694 [2024-11-20 16:52:30.452750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.694 [2024-11-20 16:52:30.452757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:45.694 [2024-11-20 16:52:30.452765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:25:45.694 [2024-11-20 16:52:30.452772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.694 [2024-11-20 16:52:30.464677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.694 [2024-11-20 16:52:30.464709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:45.694 [2024-11-20 16:52:30.464719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.889 ms 00:25:45.694 [2024-11-20 16:52:30.464726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.694 [2024-11-20 16:52:30.465060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.694 [2024-11-20 16:52:30.465075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:45.694 [2024-11-20 16:52:30.465083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:25:45.694 [2024-11-20 16:52:30.465090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.694 [2024-11-20 16:52:30.497348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.694 [2024-11-20 16:52:30.497396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.694 [2024-11-20 
16:52:30.497405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.694 [2024-11-20 16:52:30.497413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.694 [2024-11-20 16:52:30.497463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.694 [2024-11-20 16:52:30.497472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.694 [2024-11-20 16:52:30.497479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.694 [2024-11-20 16:52:30.497486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.694 [2024-11-20 16:52:30.497535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.694 [2024-11-20 16:52:30.497548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.694 [2024-11-20 16:52:30.497555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.694 [2024-11-20 16:52:30.497562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.694 [2024-11-20 16:52:30.497591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.694 [2024-11-20 16:52:30.497599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.694 [2024-11-20 16:52:30.497606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.694 [2024-11-20 16:52:30.497613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.954 [2024-11-20 16:52:30.574839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.954 [2024-11-20 16:52:30.574883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.954 [2024-11-20 16:52:30.574894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.954 [2024-11-20 16:52:30.574902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.954 [2024-11-20 16:52:30.637866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.954 [2024-11-20 16:52:30.637912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.954 [2024-11-20 16:52:30.637922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.954 [2024-11-20 16:52:30.637930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.954 [2024-11-20 16:52:30.637981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.954 [2024-11-20 16:52:30.637989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.955 [2024-11-20 16:52:30.638001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.955 [2024-11-20 16:52:30.638009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.955 [2024-11-20 16:52:30.638060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.955 [2024-11-20 16:52:30.638070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.955 [2024-11-20 16:52:30.638078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.955 [2024-11-20 16:52:30.638085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.955 [2024-11-20 16:52:30.638170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.955 [2024-11-20 16:52:30.638180] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.955 [2024-11-20 16:52:30.638188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.955 [2024-11-20 16:52:30.638197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.955 [2024-11-20 16:52:30.638226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.955 [2024-11-20 16:52:30.638234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.955 [2024-11-20 16:52:30.638242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.955 [2024-11-20 16:52:30.638249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.955 [2024-11-20 16:52:30.638281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.955 [2024-11-20 16:52:30.638289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.955 [2024-11-20 16:52:30.638297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.955 [2024-11-20 16:52:30.638306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.955 [2024-11-20 16:52:30.638343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.955 [2024-11-20 16:52:30.638352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.955 [2024-11-20 16:52:30.638360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.955 [2024-11-20 16:52:30.638366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.955 [2024-11-20 16:52:30.638491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.806 ms, result 0 00:25:46.524 00:25:46.524 00:25:46.524 16:52:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:48.465 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:48.465 16:52:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:48.465 [2024-11-20 16:52:33.023270] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:25:48.465 [2024-11-20 16:52:33.023367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79779 ] 00:25:48.465 [2024-11-20 16:52:33.178565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.465 [2024-11-20 16:52:33.279155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.726 [2024-11-20 16:52:33.531808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:48.726 [2024-11-20 16:52:33.531871] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:48.989 [2024-11-20 16:52:33.690286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.690339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:48.989 [2024-11-20 16:52:33.690357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:48.989 [2024-11-20 16:52:33.690365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.690424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.690434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:48.989 [2024-11-20 16:52:33.690445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:48.989 [2024-11-20 16:52:33.690452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.690471] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:48.989 [2024-11-20 16:52:33.691180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:48.989 [2024-11-20 16:52:33.691207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.691215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:48.989 [2024-11-20 16:52:33.691223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:25:48.989 [2024-11-20 16:52:33.691230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.692301] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:48.989 [2024-11-20 16:52:33.704629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.704666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:48.989 [2024-11-20 16:52:33.704677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.329 ms 00:25:48.989 [2024-11-20 16:52:33.704685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.704747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.704757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:48.989 [2024-11-20 16:52:33.704765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:48.989 [2024-11-20 16:52:33.704772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.709492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:48.989 [2024-11-20 16:52:33.709520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:48.989 [2024-11-20 16:52:33.709529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.676 ms 00:25:48.989 [2024-11-20 16:52:33.709536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.709605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.709615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:48.989 [2024-11-20 16:52:33.709623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:48.989 [2024-11-20 16:52:33.709630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.709675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.709684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:48.989 [2024-11-20 16:52:33.709692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:48.989 [2024-11-20 16:52:33.709699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.709720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:48.989 [2024-11-20 16:52:33.712894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.712921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:48.989 [2024-11-20 16:52:33.712932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:25:48.989 [2024-11-20 16:52:33.712942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.712969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.989 [2024-11-20 16:52:33.712977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:48.989 [2024-11-20 16:52:33.712985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:48.989 [2024-11-20 16:52:33.712992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.989 [2024-11-20 16:52:33.713011] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:48.989 [2024-11-20 16:52:33.713029] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:48.989 [2024-11-20 16:52:33.713063] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:48.989 [2024-11-20 16:52:33.713085] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:48.989 [2024-11-20 16:52:33.713187] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:48.989 [2024-11-20 16:52:33.713204] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:48.989 [2024-11-20 16:52:33.713214] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:48.989 [2024-11-20 16:52:33.713224] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:48.989 [2024-11-20 16:52:33.713233] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:48.989 [2024-11-20 16:52:33.713240] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:48.989 [2024-11-20 16:52:33.713248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:48.990 [2024-11-20 16:52:33.713256] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:48.990 [2024-11-20 16:52:33.713262] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:48.990 [2024-11-20 16:52:33.713272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.990 [2024-11-20 16:52:33.713279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:48.990 [2024-11-20 16:52:33.713286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:25:48.990 [2024-11-20 16:52:33.713292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.990 [2024-11-20 16:52:33.713375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.990 [2024-11-20 16:52:33.713397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:48.990 [2024-11-20 16:52:33.713405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:48.990 [2024-11-20 16:52:33.713411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.990 [2024-11-20 16:52:33.713511] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:48.990 [2024-11-20 16:52:33.713534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:48.990 [2024-11-20 16:52:33.713543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:48.990 [2024-11-20 16:52:33.713564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:48.990 [2024-11-20 16:52:33.713586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.990 [2024-11-20 16:52:33.713598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:48.990 [2024-11-20 16:52:33.713605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:48.990 [2024-11-20 16:52:33.713612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.990 [2024-11-20 16:52:33.713618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:48.990 [2024-11-20 16:52:33.713625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:48.990 [2024-11-20 16:52:33.713636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:48.990 [2024-11-20 16:52:33.713649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713655] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:48.990 [2024-11-20 16:52:33.713668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:48.990 [2024-11-20 16:52:33.713688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:48.990 [2024-11-20 16:52:33.713706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:48.990 [2024-11-20 16:52:33.713725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:48.990 [2024-11-20 16:52:33.713744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.990 [2024-11-20 16:52:33.713756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:48.990 [2024-11-20 16:52:33.713763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:48.990 [2024-11-20 16:52:33.713769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.990 [2024-11-20 16:52:33.713775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:48.990 [2024-11-20 16:52:33.713782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:48.990 [2024-11-20 16:52:33.713789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:48.990 [2024-11-20 16:52:33.713801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:48.990 [2024-11-20 16:52:33.713807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713814] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:48.990 [2024-11-20 16:52:33.713821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:48.990 [2024-11-20 16:52:33.713828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.990 [2024-11-20 16:52:33.713842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:48.990 [2024-11-20 16:52:33.713849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:48.990 [2024-11-20 16:52:33.713855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:48.990 
[2024-11-20 16:52:33.713862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:48.990 [2024-11-20 16:52:33.713868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:48.990 [2024-11-20 16:52:33.713875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:48.990 [2024-11-20 16:52:33.713883] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:48.990 [2024-11-20 16:52:33.713891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.990 [2024-11-20 16:52:33.713899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:48.990 [2024-11-20 16:52:33.713906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:48.990 [2024-11-20 16:52:33.713913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:48.990 [2024-11-20 16:52:33.713920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:48.990 [2024-11-20 16:52:33.713926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:48.990 [2024-11-20 16:52:33.713933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:48.990 [2024-11-20 16:52:33.713939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:48.990 [2024-11-20 16:52:33.713946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:48.990 [2024-11-20 16:52:33.713953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:48.990 [2024-11-20 16:52:33.713960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:48.990 [2024-11-20 16:52:33.713966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:48.990 [2024-11-20 16:52:33.713973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:48.990 [2024-11-20 16:52:33.713980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:48.990 [2024-11-20 16:52:33.713987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:48.990 [2024-11-20 16:52:33.713994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:48.990 [2024-11-20 16:52:33.714008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.990 [2024-11-20 16:52:33.714016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:48.990 [2024-11-20 16:52:33.714023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:48.990 [2024-11-20 16:52:33.714030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:48.990 [2024-11-20 16:52:33.714036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:48.990 [2024-11-20 16:52:33.714044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.990 [2024-11-20 16:52:33.714051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:48.990 [2024-11-20 16:52:33.714058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:25:48.990 [2024-11-20 16:52:33.714064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.990 [2024-11-20 16:52:33.739626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.990 [2024-11-20 16:52:33.739660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:48.990 [2024-11-20 16:52:33.739670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.510 ms 00:25:48.990 [2024-11-20 16:52:33.739677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.990 [2024-11-20 16:52:33.739760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.990 [2024-11-20 16:52:33.739769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:48.990 [2024-11-20 16:52:33.739777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:48.990 [2024-11-20 16:52:33.739784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.990 [2024-11-20 16:52:33.777062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.777102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:48.991 [2024-11-20 16:52:33.777114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.226 ms 00:25:48.991 [2024-11-20 16:52:33.777122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.777164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.777173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:48.991 [2024-11-20 16:52:33.777182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:48.991 [2024-11-20 16:52:33.777192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.777558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.777580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:48.991 [2024-11-20 16:52:33.777589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:25:48.991 [2024-11-20 16:52:33.777596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.777718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.777726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:48.991 [2024-11-20 16:52:33.777734] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:25:48.991 [2024-11-20 16:52:33.777742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.790636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.790665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:48.991 [2024-11-20 16:52:33.790674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.870 ms 00:25:48.991 [2024-11-20 16:52:33.790684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.803552] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:48.991 [2024-11-20 16:52:33.803586] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:48.991 [2024-11-20 16:52:33.803597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.803605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:48.991 [2024-11-20 16:52:33.803614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.807 ms 00:25:48.991 [2024-11-20 16:52:33.803621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.828643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.828697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:48.991 [2024-11-20 16:52:33.828710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.983 ms 00:25:48.991 [2024-11-20 16:52:33.828718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.841866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.841908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:48.991 [2024-11-20 16:52:33.841920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.091 ms 00:25:48.991 [2024-11-20 16:52:33.841927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.854445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.854483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:48.991 [2024-11-20 16:52:33.854494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.479 ms 00:25:48.991 [2024-11-20 16:52:33.854501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-11-20 16:52:33.855114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-11-20 16:52:33.855134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:48.991 [2024-11-20 16:52:33.855144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:25:48.991 [2024-11-20 16:52:33.855156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.250 [2024-11-20 16:52:33.910427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.250 [2024-11-20 16:52:33.910482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:49.250 [2024-11-20 16:52:33.910499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.249 ms 00:25:49.250 [2024-11-20 16:52:33.910507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.250 [2024-11-20 16:52:33.921411] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:49.250 [2024-11-20 16:52:33.923888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.250 [2024-11-20 16:52:33.923917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:49.250 [2024-11-20 16:52:33.923929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.334 ms 00:25:49.250 [2024-11-20 16:52:33.923939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.250 [2024-11-20 16:52:33.924034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.250 [2024-11-20 16:52:33.924046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:49.250 [2024-11-20 16:52:33.924055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:49.250 [2024-11-20 16:52:33.924067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.250 [2024-11-20 16:52:33.924653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.250 [2024-11-20 16:52:33.924678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:49.250 [2024-11-20 16:52:33.924689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:25:49.250 [2024-11-20 16:52:33.924698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.250 [2024-11-20 16:52:33.924721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.250 [2024-11-20 16:52:33.924730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:49.250 [2024-11-20 16:52:33.924739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:49.250 [2024-11-20 16:52:33.924747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.250 [2024-11-20 16:52:33.924780] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:49.250 [2024-11-20 16:52:33.924793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.251 [2024-11-20 16:52:33.924803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:49.251 [2024-11-20 16:52:33.924811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:49.251 [2024-11-20 16:52:33.924819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.251 [2024-11-20 16:52:33.947652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.251 [2024-11-20 16:52:33.947688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:49.251 [2024-11-20 16:52:33.947700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.816 ms 00:25:49.251 [2024-11-20 16:52:33.947711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.251 [2024-11-20 16:52:33.947778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.251 [2024-11-20 16:52:33.947788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:49.251 [2024-11-20 16:52:33.947797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:49.251 [2024-11-20 16:52:33.947804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:49.251 [2024-11-20 16:52:33.948666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.961 ms, result 0 00:25:50.634  [2024-11-20T16:52:36.459Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-20T16:52:37.400Z] Copying: 49/1024 [MB] (21 MBps) [2024-11-20T16:52:38.341Z] Copying: 65/1024 [MB] (15 MBps) [2024-11-20T16:52:39.287Z] Copying: 81/1024 [MB] (15 MBps) [2024-11-20T16:52:40.277Z] Copying: 103/1024 [MB] (22 MBps) [2024-11-20T16:52:41.230Z] Copying: 117/1024 [MB] (13 MBps) [2024-11-20T16:52:42.174Z] Copying: 129852/1048576 [kB] (9996 kBps) [2024-11-20T16:52:43.558Z] Copying: 140/1024 [MB] (13 MBps) [2024-11-20T16:52:44.515Z] Copying: 156/1024 [MB] (16 MBps) [2024-11-20T16:52:45.457Z] Copying: 167/1024 [MB] (10 MBps) [2024-11-20T16:52:46.402Z] Copying: 181112/1048576 [kB] (9952 kBps) [2024-11-20T16:52:47.342Z] Copying: 190/1024 [MB] (14 MBps) [2024-11-20T16:52:48.284Z] Copying: 205340/1048576 [kB] (9812 kBps) [2024-11-20T16:52:49.242Z] Copying: 215196/1048576 [kB] (9856 kBps) [2024-11-20T16:52:50.198Z] Copying: 225028/1048576 [kB] (9832 kBps) [2024-11-20T16:52:51.582Z] Copying: 229/1024 [MB] (10 MBps) [2024-11-20T16:52:52.513Z] Copying: 243/1024 [MB] (13 MBps) [2024-11-20T16:52:53.452Z] Copying: 277/1024 [MB] (34 MBps) [2024-11-20T16:52:54.395Z] Copying: 303/1024 [MB] (25 MBps) [2024-11-20T16:52:55.338Z] Copying: 320/1024 [MB] (17 MBps) [2024-11-20T16:52:56.274Z] Copying: 338/1024 [MB] (17 MBps) [2024-11-20T16:52:57.216Z] Copying: 367/1024 [MB] (29 MBps) [2024-11-20T16:52:58.163Z] Copying: 389/1024 [MB] (22 MBps) [2024-11-20T16:52:59.552Z] Copying: 402/1024 [MB] (13 MBps) [2024-11-20T16:53:00.180Z] Copying: 420/1024 [MB] (17 MBps) [2024-11-20T16:53:01.565Z] Copying: 434/1024 [MB] (13 MBps) [2024-11-20T16:53:02.504Z] Copying: 453/1024 [MB] (19 MBps) [2024-11-20T16:53:03.450Z] Copying: 476/1024 [MB] (22 MBps) [2024-11-20T16:53:04.392Z] Copying: 491/1024 [MB] (15 MBps) [2024-11-20T16:53:05.337Z] Copying: 503/1024 [MB] (11 MBps) [2024-11-20T16:53:06.281Z] Copying: 521/1024 [MB] (17 MBps) [2024-11-20T16:53:07.225Z] Copying: 531/1024 [MB] (10 MBps) [2024-11-20T16:53:08.169Z] Copying: 541/1024 [MB] (10 MBps) [2024-11-20T16:53:09.564Z] Copying: 551/1024 [MB] (10 MBps) [2024-11-20T16:53:10.511Z] Copying: 561/1024 [MB] (10 MBps) [2024-11-20T16:53:11.454Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-20T16:53:12.447Z] Copying: 583/1024 [MB] (11 MBps) [2024-11-20T16:53:13.389Z] Copying: 594/1024 [MB] (10 MBps) [2024-11-20T16:53:14.329Z] Copying: 605/1024 [MB] (10 MBps) [2024-11-20T16:53:15.274Z] Copying: 615/1024 [MB] (10 MBps) [2024-11-20T16:53:16.217Z] Copying: 627/1024 [MB] (11 MBps) [2024-11-20T16:53:17.160Z] Copying: 640/1024 [MB] (13 MBps) [2024-11-20T16:53:18.549Z] Copying: 652/1024 [MB] (12 MBps) [2024-11-20T16:53:19.491Z] Copying: 663/1024 [MB] (10 MBps) [2024-11-20T16:53:20.432Z] Copying: 675/1024 [MB] (12 MBps) [2024-11-20T16:53:21.376Z] Copying: 687/1024 [MB] (11 MBps) [2024-11-20T16:53:22.321Z] Copying: 697/1024 [MB] (10 MBps) [2024-11-20T16:53:23.263Z] Copying: 708/1024 [MB] (11 MBps) [2024-11-20T16:53:24.205Z] Copying: 720/1024 [MB] (11 MBps) [2024-11-20T16:53:25.149Z] Copying: 733/1024 [MB] (12 MBps) [2024-11-20T16:53:26.536Z] Copying: 747/1024 [MB] (14 MBps) [2024-11-20T16:53:27.479Z] Copying: 758/1024 [MB] (11 MBps) [2024-11-20T16:53:28.422Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-20T16:53:29.362Z] Copying: 781/1024 [MB] (11 MBps) [2024-11-20T16:53:30.327Z] Copying: 792/1024 [MB] (11 MBps) [2024-11-20T16:53:31.270Z] 
Copying: 803/1024 [MB] (10 MBps) [2024-11-20T16:53:32.212Z] Copying: 814/1024 [MB] (10 MBps) [2024-11-20T16:53:33.153Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-20T16:53:34.543Z] Copying: 838/1024 [MB] (14 MBps) [2024-11-20T16:53:35.487Z] Copying: 850/1024 [MB] (11 MBps) [2024-11-20T16:53:36.430Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-20T16:53:37.370Z] Copying: 872/1024 [MB] (11 MBps) [2024-11-20T16:53:38.302Z] Copying: 891/1024 [MB] (19 MBps) [2024-11-20T16:53:39.240Z] Copying: 934/1024 [MB] (42 MBps) [2024-11-20T16:53:40.171Z] Copying: 981/1024 [MB] (46 MBps) [2024-11-20T16:53:40.171Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-20 16:53:40.165741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.285 [2024-11-20 16:53:40.165804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:55.285 [2024-11-20 16:53:40.165818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:55.285 [2024-11-20 16:53:40.165825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.285 [2024-11-20 16:53:40.165846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:55.285 [2024-11-20 16:53:40.168432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.285 [2024-11-20 16:53:40.168464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:55.285 [2024-11-20 16:53:40.168480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:26:55.285 [2024-11-20 16:53:40.168488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.285 [2024-11-20 16:53:40.168702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.285 [2024-11-20 16:53:40.168718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:55.285 [2024-11-20 16:53:40.168727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:26:55.285 [2024-11-20 16:53:40.168734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.172188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.172210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:55.545 [2024-11-20 16:53:40.172219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms 00:26:55.545 [2024-11-20 16:53:40.172228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.180059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.180100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:55.545 [2024-11-20 16:53:40.180111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.810 ms 00:26:55.545 [2024-11-20 16:53:40.180119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.205137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.205177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:55.545 [2024-11-20 16:53:40.205189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.953 ms 00:26:55.545 [2024-11-20 16:53:40.205197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.219142] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.219179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:55.545 [2024-11-20 16:53:40.219193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.921 ms 00:26:55.545 [2024-11-20 16:53:40.219200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.220946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.220985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:55.545 [2024-11-20 16:53:40.220995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms 00:26:55.545 [2024-11-20 16:53:40.221002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.244251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.244286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:55.545 [2024-11-20 16:53:40.244297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.234 ms 00:26:55.545 [2024-11-20 16:53:40.244305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.266678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.266718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:55.545 [2024-11-20 16:53:40.266729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.353 ms 00:26:55.545 [2024-11-20 16:53:40.266736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.288772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.288806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:55.545 [2024-11-20 16:53:40.288816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.016 ms 00:26:55.545 [2024-11-20 16:53:40.288823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.310793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.545 [2024-11-20 16:53:40.310829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:55.545 [2024-11-20 16:53:40.310839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.925 ms 00:26:55.545 [2024-11-20 16:53:40.310847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.545 [2024-11-20 16:53:40.310867] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:55.545 [2024-11-20 16:53:40.310881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:55.545 [2024-11-20 16:53:40.310896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:55.545 [2024-11-20 16:53:40.310904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310927] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:55.545 [2024-11-20 16:53:40.310980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.310988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.310995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 
16:53:40.311114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:26:55.546 [2024-11-20 16:53:40.311294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:55.546 [2024-11-20 16:53:40.311646] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:55.546 [2024-11-20 16:53:40.311656] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0afc11df-403e-4ed8-837d-a5655901f1fc 00:26:55.546 [2024-11-20 16:53:40.311663] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:55.547 [2024-11-20 16:53:40.311670] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:55.547 [2024-11-20 16:53:40.311677] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:55.547 [2024-11-20 16:53:40.311685] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:55.547 [2024-11-20 16:53:40.311691] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:55.547 [2024-11-20 16:53:40.311698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:55.547 [2024-11-20 16:53:40.311711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:55.547 [2024-11-20 16:53:40.311717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:55.547 [2024-11-20 16:53:40.311723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:55.547 [2024-11-20 16:53:40.311730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.547 [2024-11-20 16:53:40.311738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:55.547 [2024-11-20 16:53:40.311746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:26:55.547 [2024-11-20 16:53:40.311753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.547 [2024-11-20 16:53:40.323773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.547 [2024-11-20 16:53:40.323804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:55.547 [2024-11-20 16:53:40.323815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.004 ms 00:26:55.547 [2024-11-20 16:53:40.323823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.547 [2024-11-20 16:53:40.324161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.547 [2024-11-20 16:53:40.324176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:55.547 [2024-11-20 16:53:40.324188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:26:55.547 [2024-11-20 16:53:40.324196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.547 [2024-11-20 16:53:40.356308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.547 [2024-11-20 16:53:40.356345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:55.547 [2024-11-20 16:53:40.356356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.547 [2024-11-20 16:53:40.356363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.547 [2024-11-20 16:53:40.356434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.547 [2024-11-20 16:53:40.356443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:55.547 [2024-11-20 16:53:40.356453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.547 [2024-11-20 16:53:40.356461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.547 [2024-11-20 16:53:40.356516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.547 [2024-11-20 16:53:40.356526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:55.547 [2024-11-20 16:53:40.356534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.547 [2024-11-20 16:53:40.356540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.547 [2024-11-20 16:53:40.356554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.547 [2024-11-20 16:53:40.356562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:26:55.547 [2024-11-20 16:53:40.356569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.547 [2024-11-20 16:53:40.356579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.806 [2024-11-20 16:53:40.435887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.806 [2024-11-20 16:53:40.435944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:55.806 [2024-11-20 16:53:40.435957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.806 [2024-11-20 16:53:40.435966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.806 [2024-11-20 16:53:40.499856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.806 [2024-11-20 16:53:40.499906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:55.806 [2024-11-20 16:53:40.499917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.806 [2024-11-20 16:53:40.499932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.806 [2024-11-20 16:53:40.500001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.807 [2024-11-20 16:53:40.500011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:55.807 [2024-11-20 16:53:40.500019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.807 [2024-11-20 16:53:40.500027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.807 [2024-11-20 16:53:40.500060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.807 [2024-11-20 16:53:40.500069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:55.807 [2024-11-20 16:53:40.500076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.807 [2024-11-20 16:53:40.500083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.807 [2024-11-20 16:53:40.500170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.807 [2024-11-20 16:53:40.500180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:55.807 [2024-11-20 16:53:40.500188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.807 [2024-11-20 16:53:40.500195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.807 [2024-11-20 16:53:40.500223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.807 [2024-11-20 16:53:40.500232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:55.807 [2024-11-20 16:53:40.500239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.807 [2024-11-20 16:53:40.500246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.807 [2024-11-20 16:53:40.500281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.807 [2024-11-20 16:53:40.500290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:55.807 [2024-11-20 16:53:40.500298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.807 [2024-11-20 16:53:40.500305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.807 [2024-11-20 16:53:40.500341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.807 [2024-11-20 16:53:40.500350] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:55.807 [2024-11-20 16:53:40.500358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.807 [2024-11-20 16:53:40.500365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.807 [2024-11-20 16:53:40.500493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.727 ms, result 0 00:26:56.372 00:26:56.372 00:26:56.372 16:53:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:58.903 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:58.903 Process with pid 78522 is not found 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78522 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78522 ']' 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 78522 00:26:58.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78522) - No such process 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 78522 is not found' 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:58.903 Remove shared memory files 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:58.903 ************************************ 00:26:58.903 END TEST ftl_dirty_shutdown 00:26:58.903 ************************************ 00:26:58.903 00:26:58.903 real 3m6.007s 00:26:58.903 user 3m23.573s 00:26:58.903 sys 0m23.399s 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.903 16:53:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.162 16:53:43 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.162 16:53:43 ftl -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:59.162 16:53:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.162 16:53:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:59.162 ************************************ 00:26:59.162 START TEST ftl_upgrade_shutdown 00:26:59.162 ************************************ 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.162 * Looking for test storage... 00:26:59.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:59.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.162 --rc genhtml_branch_coverage=1 00:26:59.162 --rc genhtml_function_coverage=1 00:26:59.162 --rc genhtml_legend=1 00:26:59.162 --rc geninfo_all_blocks=1 00:26:59.162 --rc geninfo_unexecuted_blocks=1 00:26:59.162 00:26:59.162 ' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:59.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.162 --rc genhtml_branch_coverage=1 00:26:59.162 --rc genhtml_function_coverage=1 00:26:59.162 --rc genhtml_legend=1 00:26:59.162 --rc geninfo_all_blocks=1 00:26:59.162 --rc geninfo_unexecuted_blocks=1 00:26:59.162 00:26:59.162 ' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:59.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.162 --rc genhtml_branch_coverage=1 00:26:59.162 --rc genhtml_function_coverage=1 00:26:59.162 --rc genhtml_legend=1 00:26:59.162 --rc geninfo_all_blocks=1 00:26:59.162 --rc geninfo_unexecuted_blocks=1 00:26:59.162 00:26:59.162 ' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:59.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.162 --rc genhtml_branch_coverage=1 00:26:59.162 --rc genhtml_function_coverage=1 00:26:59.162 --rc genhtml_legend=1 00:26:59.162 --rc geninfo_all_blocks=1 00:26:59.162 --rc geninfo_unexecuted_blocks=1 00:26:59.162 00:26:59.162 ' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:59.162 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:59.163 16:53:43 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80571 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80571 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80571 ']' 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.163 16:53:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.421 [2024-11-20 16:53:44.047559] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
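The target side of this test is simply a spdk_tgt pinned to core 0 whose RPC socket the harness polls before continuing. The following is a minimal sketch of that bring-up, not the harness's waitforlisten helper itself; it assumes the default /var/tmp/spdk.sock socket seen above and uses the standard rpc_get_methods call as the readiness probe:

  # Rough equivalent of the bring-up above (not the harness code):
  # start the FTL target on core 0 and wait until its RPC socket answers.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  tgt_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      # Bail out if the target died during startup instead of polling forever.
      kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done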
00:26:59.421 [2024-11-20 16:53:44.047670] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80571 ] 00:26:59.421 [2024-11-20 16:53:44.207374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.679 [2024-11-20 16:53:44.306278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:00.245 16:53:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:00.569 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:00.569 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:00.569 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:00.569 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:00.569 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:00.569 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:00.570 { 00:27:00.570 "name": "basen1", 00:27:00.570 "aliases": [ 00:27:00.570 "0ba2da22-ee4f-436c-a3cd-2dc0b2132d89" 00:27:00.570 ], 00:27:00.570 "product_name": "NVMe disk", 00:27:00.570 "block_size": 4096, 00:27:00.570 "num_blocks": 1310720, 00:27:00.570 "uuid": "0ba2da22-ee4f-436c-a3cd-2dc0b2132d89", 00:27:00.570 "numa_id": -1, 00:27:00.570 "assigned_rate_limits": { 00:27:00.570 "rw_ios_per_sec": 0, 00:27:00.570 "rw_mbytes_per_sec": 0, 00:27:00.570 "r_mbytes_per_sec": 0, 00:27:00.570 "w_mbytes_per_sec": 0 00:27:00.570 }, 00:27:00.570 "claimed": true, 00:27:00.570 "claim_type": "read_many_write_one", 00:27:00.570 "zoned": false, 00:27:00.570 "supported_io_types": { 00:27:00.570 "read": true, 00:27:00.570 "write": true, 00:27:00.570 "unmap": true, 00:27:00.570 "flush": true, 00:27:00.570 "reset": true, 00:27:00.570 "nvme_admin": true, 00:27:00.570 "nvme_io": true, 00:27:00.570 "nvme_io_md": false, 00:27:00.570 "write_zeroes": true, 00:27:00.570 "zcopy": false, 00:27:00.570 "get_zone_info": false, 00:27:00.570 "zone_management": false, 00:27:00.570 "zone_append": false, 00:27:00.570 "compare": true, 00:27:00.570 "compare_and_write": false, 00:27:00.570 "abort": true, 00:27:00.570 "seek_hole": false, 00:27:00.570 "seek_data": false, 00:27:00.570 "copy": true, 00:27:00.570 "nvme_iov_md": false 00:27:00.570 }, 00:27:00.570 "driver_specific": { 00:27:00.570 "nvme": [ 00:27:00.570 { 00:27:00.570 "pci_address": "0000:00:11.0", 00:27:00.570 "trid": { 00:27:00.570 "trtype": "PCIe", 00:27:00.570 "traddr": "0000:00:11.0" 00:27:00.570 }, 00:27:00.570 "ctrlr_data": { 00:27:00.570 "cntlid": 0, 00:27:00.570 "vendor_id": "0x1b36", 00:27:00.570 "model_number": "QEMU NVMe Ctrl", 00:27:00.570 "serial_number": "12341", 00:27:00.570 "firmware_revision": "8.0.0", 00:27:00.570 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:00.570 "oacs": { 00:27:00.570 "security": 0, 00:27:00.570 "format": 1, 00:27:00.570 "firmware": 0, 00:27:00.570 "ns_manage": 1 00:27:00.570 }, 00:27:00.570 "multi_ctrlr": false, 00:27:00.570 "ana_reporting": false 00:27:00.570 }, 00:27:00.570 "vs": { 00:27:00.570 "nvme_version": "1.4" 00:27:00.570 }, 00:27:00.570 "ns_data": { 00:27:00.570 "id": 1, 00:27:00.570 "can_share": false 00:27:00.570 } 00:27:00.570 } 00:27:00.570 ], 00:27:00.570 "mp_policy": "active_passive" 00:27:00.570 } 00:27:00.570 } 00:27:00.570 ]' 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:00.570 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:00.841 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=d5fba332-12db-4108-88aa-1ea3a09f22c9 00:27:00.841 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:00.841 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5fba332-12db-4108-88aa-1ea3a09f22c9 00:27:01.099 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:01.099 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ad3a925a-cb74-4730-bceb-bb394639f4cd 00:27:01.099 16:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ad3a925a-cb74-4730-bceb-bb394639f4cd 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4687de56-6def-4dc9-a3e7-61a8cff69623 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4687de56-6def-4dc9-a3e7-61a8cff69623 ]] 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4687de56-6def-4dc9-a3e7-61a8cff69623 5120 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4687de56-6def-4dc9-a3e7-61a8cff69623 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4687de56-6def-4dc9-a3e7-61a8cff69623 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4687de56-6def-4dc9-a3e7-61a8cff69623 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:01.357 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4687de56-6def-4dc9-a3e7-61a8cff69623 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:01.616 { 00:27:01.616 "name": "4687de56-6def-4dc9-a3e7-61a8cff69623", 00:27:01.616 "aliases": [ 00:27:01.616 "lvs/basen1p0" 00:27:01.616 ], 00:27:01.616 "product_name": "Logical Volume", 00:27:01.616 "block_size": 4096, 00:27:01.616 "num_blocks": 5242880, 00:27:01.616 "uuid": "4687de56-6def-4dc9-a3e7-61a8cff69623", 00:27:01.616 "assigned_rate_limits": { 00:27:01.616 "rw_ios_per_sec": 0, 00:27:01.616 "rw_mbytes_per_sec": 0, 00:27:01.616 "r_mbytes_per_sec": 0, 00:27:01.616 "w_mbytes_per_sec": 0 00:27:01.616 }, 00:27:01.616 "claimed": false, 00:27:01.616 "zoned": false, 00:27:01.616 "supported_io_types": { 00:27:01.616 "read": true, 00:27:01.616 "write": true, 00:27:01.616 "unmap": true, 00:27:01.616 "flush": false, 00:27:01.616 "reset": true, 00:27:01.616 "nvme_admin": false, 00:27:01.616 "nvme_io": false, 00:27:01.616 "nvme_io_md": false, 00:27:01.616 "write_zeroes": 
true, 00:27:01.616 "zcopy": false, 00:27:01.616 "get_zone_info": false, 00:27:01.616 "zone_management": false, 00:27:01.616 "zone_append": false, 00:27:01.616 "compare": false, 00:27:01.616 "compare_and_write": false, 00:27:01.616 "abort": false, 00:27:01.616 "seek_hole": true, 00:27:01.616 "seek_data": true, 00:27:01.616 "copy": false, 00:27:01.616 "nvme_iov_md": false 00:27:01.616 }, 00:27:01.616 "driver_specific": { 00:27:01.616 "lvol": { 00:27:01.616 "lvol_store_uuid": "ad3a925a-cb74-4730-bceb-bb394639f4cd", 00:27:01.616 "base_bdev": "basen1", 00:27:01.616 "thin_provision": true, 00:27:01.616 "num_allocated_clusters": 0, 00:27:01.616 "snapshot": false, 00:27:01.616 "clone": false, 00:27:01.616 "esnap_clone": false 00:27:01.616 } 00:27:01.616 } 00:27:01.616 } 00:27:01.616 ]' 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:01.616 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:01.875 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:01.875 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:01.875 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:02.134 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:02.134 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:02.134 16:53:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4687de56-6def-4dc9-a3e7-61a8cff69623 -c cachen1p0 --l2p_dram_limit 2 00:27:02.393 [2024-11-20 16:53:47.116797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.393 [2024-11-20 16:53:47.116841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:02.394 [2024-11-20 16:53:47.116854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:02.394 [2024-11-20 16:53:47.116860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.116906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.116913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:02.394 [2024-11-20 16:53:47.116921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:02.394 [2024-11-20 16:53:47.116927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.116943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:02.394 [2024-11-20 
16:53:47.117543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:02.394 [2024-11-20 16:53:47.117560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.117566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:02.394 [2024-11-20 16:53:47.117575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:27:02.394 [2024-11-20 16:53:47.117581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.117673] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f6a6a3bc-f32b-4fce-8012-5f5288180d34 00:27:02.394 [2024-11-20 16:53:47.118695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.118714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:02.394 [2024-11-20 16:53:47.118721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:02.394 [2024-11-20 16:53:47.118729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.123638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.123663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:02.394 [2024-11-20 16:53:47.123673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.868 ms 00:27:02.394 [2024-11-20 16:53:47.123681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.123712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.123720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:02.394 [2024-11-20 16:53:47.123727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:02.394 [2024-11-20 16:53:47.123735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.123764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.123773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:02.394 [2024-11-20 16:53:47.123779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:02.394 [2024-11-20 16:53:47.123790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.123806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:02.394 [2024-11-20 16:53:47.126735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.126760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:02.394 [2024-11-20 16:53:47.126770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.931 ms 00:27:02.394 [2024-11-20 16:53:47.126776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.126796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.126803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:02.394 [2024-11-20 16:53:47.126811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:02.394 [2024-11-20 16:53:47.126817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.126837] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:02.394 [2024-11-20 16:53:47.126944] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:02.394 [2024-11-20 16:53:47.126957] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:02.394 [2024-11-20 16:53:47.126966] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:02.394 [2024-11-20 16:53:47.126975] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:02.394 [2024-11-20 16:53:47.126982] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:02.394 [2024-11-20 16:53:47.126989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:02.394 [2024-11-20 16:53:47.126995] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:02.394 [2024-11-20 16:53:47.127003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:02.394 [2024-11-20 16:53:47.127009] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:02.394 [2024-11-20 16:53:47.127016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.127022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:02.394 [2024-11-20 16:53:47.127030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:27:02.394 [2024-11-20 16:53:47.127036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.127101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.394 [2024-11-20 16:53:47.127107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:02.394 [2024-11-20 16:53:47.127116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:02.394 [2024-11-20 16:53:47.127126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.394 [2024-11-20 16:53:47.127207] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:02.394 [2024-11-20 16:53:47.127214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:02.394 [2024-11-20 16:53:47.127222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:02.394 [2024-11-20 16:53:47.127229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:02.394 [2024-11-20 16:53:47.127241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:02.394 [2024-11-20 16:53:47.127253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:02.394 [2024-11-20 16:53:47.127260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:02.394 [2024-11-20 16:53:47.127265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:02.394 [2024-11-20 16:53:47.127278] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:02.394 [2024-11-20 16:53:47.127284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:02.394 [2024-11-20 16:53:47.127298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:02.394 [2024-11-20 16:53:47.127304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:02.394 [2024-11-20 16:53:47.127317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:02.394 [2024-11-20 16:53:47.127325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:02.394 [2024-11-20 16:53:47.127337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:02.394 [2024-11-20 16:53:47.127342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.394 [2024-11-20 16:53:47.127348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:02.394 [2024-11-20 16:53:47.127354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:02.394 [2024-11-20 16:53:47.127360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.394 [2024-11-20 16:53:47.127365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:02.394 [2024-11-20 16:53:47.127372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:02.394 [2024-11-20 16:53:47.127385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.394 [2024-11-20 16:53:47.127392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:02.394 [2024-11-20 16:53:47.127397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:02.394 [2024-11-20 16:53:47.127403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.394 [2024-11-20 16:53:47.127409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:02.394 [2024-11-20 16:53:47.127417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:02.394 [2024-11-20 16:53:47.127422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:02.394 [2024-11-20 16:53:47.127433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:02.394 [2024-11-20 16:53:47.127439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:02.394 [2024-11-20 16:53:47.127451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:02.394 [2024-11-20 16:53:47.127468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:02.394 [2024-11-20 16:53:47.127474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.394 [2024-11-20 16:53:47.127479] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:02.394 [2024-11-20 16:53:47.127486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:02.394 [2024-11-20 16:53:47.127492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:02.394 [2024-11-20 16:53:47.127501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.395 [2024-11-20 16:53:47.127508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:02.395 [2024-11-20 16:53:47.127516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:02.395 [2024-11-20 16:53:47.127522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:02.395 [2024-11-20 16:53:47.127530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:02.395 [2024-11-20 16:53:47.127535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:02.395 [2024-11-20 16:53:47.127541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:02.395 [2024-11-20 16:53:47.127549] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:02.395 [2024-11-20 16:53:47.127558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:02.395 [2024-11-20 16:53:47.127573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:02.395 [2024-11-20 16:53:47.127591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:02.395 [2024-11-20 16:53:47.127598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:02.395 [2024-11-20 16:53:47.127603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:02.395 [2024-11-20 16:53:47.127610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:02.395 [2024-11-20 16:53:47.127655] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:02.395 [2024-11-20 16:53:47.127663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:02.395 [2024-11-20 16:53:47.127676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:02.395 [2024-11-20 16:53:47.127681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:02.395 [2024-11-20 16:53:47.127688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:02.395 [2024-11-20 16:53:47.127695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.395 [2024-11-20 16:53:47.127702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:02.395 [2024-11-20 16:53:47.127707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:27:02.395 [2024-11-20 16:53:47.127715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.395 [2024-11-20 16:53:47.127757] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
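A quick cross-check of the layout dump above (rough arithmetic, not harness output): the superblock reports 3,774,873 L2P entries at 4 bytes each, roughly 14.4 MiB of mapping data, which is consistent with the 14.50 MiB l2p region (0xe80 = 3712 blocks of 4 KiB). The --l2p_dram_limit 2 argument passed to bdev_ftl_create caps how much of that table may stay resident in DRAM at once, which is why the startup trace below reports an l2p maximum resident size of 1 (of 2) MiB rather than the full table.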
00:27:02.395 [2024-11-20 16:53:47.127768] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:04.986 [2024-11-20 16:53:49.419241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.419296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:04.986 [2024-11-20 16:53:49.419311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2291.476 ms 00:27:04.986 [2024-11-20 16:53:49.419322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.444650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.444693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:04.986 [2024-11-20 16:53:49.444705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.115 ms 00:27:04.986 [2024-11-20 16:53:49.444715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.444794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.444806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:04.986 [2024-11-20 16:53:49.444815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:04.986 [2024-11-20 16:53:49.444825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.475298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.475340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:04.986 [2024-11-20 16:53:49.475352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.434 ms 00:27:04.986 [2024-11-20 16:53:49.475361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.475407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.475421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:04.986 [2024-11-20 16:53:49.475430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:04.986 [2024-11-20 16:53:49.475440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.475789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.475807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:04.986 [2024-11-20 16:53:49.475816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:27:04.986 [2024-11-20 16:53:49.475826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.475872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.475882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:04.986 [2024-11-20 16:53:49.475892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:04.986 [2024-11-20 16:53:49.475902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.489851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.489882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:04.986 [2024-11-20 16:53:49.489892] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.929 ms 00:27:04.986 [2024-11-20 16:53:49.489900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.501671] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:04.986 [2024-11-20 16:53:49.502489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.502513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:04.986 [2024-11-20 16:53:49.502524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.515 ms 00:27:04.986 [2024-11-20 16:53:49.502531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.533776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.533810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:04.986 [2024-11-20 16:53:49.533826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.218 ms 00:27:04.986 [2024-11-20 16:53:49.533834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.533903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.533914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:04.986 [2024-11-20 16:53:49.533927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:04.986 [2024-11-20 16:53:49.533935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.556254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.556282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:04.986 [2024-11-20 16:53:49.556295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.274 ms 00:27:04.986 [2024-11-20 16:53:49.556303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.578878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.578903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:04.986 [2024-11-20 16:53:49.578914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.548 ms 00:27:04.986 [2024-11-20 16:53:49.578921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.579471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.579485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:04.986 [2024-11-20 16:53:49.579495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:27:04.986 [2024-11-20 16:53:49.579502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.646192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.646229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:04.986 [2024-11-20 16:53:49.646245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.652 ms 00:27:04.986 [2024-11-20 16:53:49.646254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.670083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:04.986 [2024-11-20 16:53:49.670117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:04.986 [2024-11-20 16:53:49.670136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.770 ms 00:27:04.986 [2024-11-20 16:53:49.670144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.693239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.693272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:04.986 [2024-11-20 16:53:49.693285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.068 ms 00:27:04.986 [2024-11-20 16:53:49.693292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.716470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.716508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:04.986 [2024-11-20 16:53:49.716521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.153 ms 00:27:04.986 [2024-11-20 16:53:49.716528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.716558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.716566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:04.986 [2024-11-20 16:53:49.716581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:04.986 [2024-11-20 16:53:49.716589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.716664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.986 [2024-11-20 16:53:49.716674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:04.986 [2024-11-20 16:53:49.716686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:04.986 [2024-11-20 16:53:49.716693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.986 [2024-11-20 16:53:49.717564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2600.339 ms, result 0 00:27:04.986 { 00:27:04.986 "name": "ftl", 00:27:04.986 "uuid": "f6a6a3bc-f32b-4fce-8012-5f5288180d34" 00:27:04.986 } 00:27:04.986 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:05.244 [2024-11-20 16:53:49.928960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.244 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:05.502 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:05.502 [2024-11-20 16:53:50.329395] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:05.502 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:05.760 [2024-11-20 16:53:50.525710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:05.760 16:53:50 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:06.019 Fill FTL, iteration 1 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80682 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80682 /var/tmp/spdk.tgt.sock 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80682 ']' 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.019 16:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.278 [2024-11-20 16:53:50.938915] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:06.278 [2024-11-20 16:53:50.939030] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80682 ] 00:27:06.278 [2024-11-20 16:53:51.098911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.536 [2024-11-20 16:53:51.200840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.104 16:53:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.104 16:53:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:07.104 16:53:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:07.362 ftln1 00:27:07.362 16:53:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:07.362 16:53:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80682 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80682 ']' 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80682 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80682 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.620 killing process with pid 80682 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80682' 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80682 00:27:07.620 16:53:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80682 00:27:08.995 16:53:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:08.995 16:53:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:08.995 [2024-11-20 16:53:53.754269] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:08.995 [2024-11-20 16:53:53.754374] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80720 ] 00:27:09.253 [2024-11-20 16:53:53.914257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.253 [2024-11-20 16:53:54.014107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.626  [2024-11-20T16:53:56.448Z] Copying: 219/1024 [MB] (219 MBps) [2024-11-20T16:53:57.382Z] Copying: 465/1024 [MB] (246 MBps) [2024-11-20T16:53:58.757Z] Copying: 539/1024 [MB] (74 MBps) [2024-11-20T16:53:59.324Z] Copying: 799/1024 [MB] (260 MBps) [2024-11-20T16:53:59.889Z] Copying: 1024/1024 [MB] (average 211 MBps) 00:27:15.003 00:27:15.003 Calculate MD5 checksum, iteration 1 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:15.003 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:15.003 [2024-11-20 16:53:59.826577] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:15.003 [2024-11-20 16:53:59.826670] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80785 ] 00:27:15.260 [2024-11-20 16:53:59.974770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.260 [2024-11-20 16:54:00.056632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.635  [2024-11-20T16:54:02.086Z] Copying: 688/1024 [MB] (688 MBps) [2024-11-20T16:54:02.653Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:27:17.767 00:27:17.767 16:54:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:17.767 16:54:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=23ae04f08ef09a9577c7c655206ee0a1 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:19.666 Fill FTL, iteration 2 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:19.666 16:54:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:19.666 [2024-11-20 16:54:04.409170] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:19.666 [2024-11-20 16:54:04.409464] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80844 ] 00:27:19.924 [2024-11-20 16:54:04.569496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.924 [2024-11-20 16:54:04.668337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.298  [2024-11-20T16:54:07.115Z] Copying: 216/1024 [MB] (216 MBps) [2024-11-20T16:54:08.047Z] Copying: 448/1024 [MB] (232 MBps) [2024-11-20T16:54:09.417Z] Copying: 720/1024 [MB] (272 MBps) [2024-11-20T16:54:09.417Z] Copying: 981/1024 [MB] (261 MBps) [2024-11-20T16:54:09.981Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:27:25.095 00:27:25.095 Calculate MD5 checksum, iteration 2 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:25.095 16:54:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:25.095 [2024-11-20 16:54:09.826740] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:25.095 [2024-11-20 16:54:09.827035] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80903 ] 00:27:25.353 [2024-11-20 16:54:09.981920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.353 [2024-11-20 16:54:10.066422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.726  [2024-11-20T16:54:12.177Z] Copying: 683/1024 [MB] (683 MBps) [2024-11-20T16:54:13.109Z] Copying: 1024/1024 [MB] (average 685 MBps) 00:27:28.223 00:27:28.223 16:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:28.223 16:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:30.121 16:54:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:30.121 16:54:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b2c704c75051911523eb2d69e5e5560d 00:27:30.121 16:54:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:30.121 16:54:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:30.121 16:54:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:30.121 [2024-11-20 16:54:14.709263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.121 [2024-11-20 16:54:14.709480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.121 [2024-11-20 16:54:14.709501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:30.121 [2024-11-20 16:54:14.709510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.121 [2024-11-20 16:54:14.709540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.121 [2024-11-20 16:54:14.709550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.121 [2024-11-20 16:54:14.709558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:30.121 [2024-11-20 16:54:14.709569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.121 [2024-11-20 16:54:14.709589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.121 [2024-11-20 16:54:14.709596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.121 [2024-11-20 16:54:14.709605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:30.121 [2024-11-20 16:54:14.709612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.121 [2024-11-20 16:54:14.709673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.398 ms, result 0 00:27:30.121 true 00:27:30.121 16:54:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.121 { 00:27:30.121 "name": "ftl", 00:27:30.121 "properties": [ 00:27:30.121 { 00:27:30.121 "name": "superblock_version", 00:27:30.121 "value": 5, 00:27:30.121 "read-only": true 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "name": "base_device", 00:27:30.121 "bands": [ 00:27:30.121 { 00:27:30.121 "id": 0, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 
00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 1, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 2, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 3, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 4, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 5, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 6, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 7, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 8, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 9, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 10, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 11, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 12, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 13, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 14, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 15, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 16, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 17, 00:27:30.121 "state": "FREE", 00:27:30.121 "validity": 0.0 00:27:30.121 } 00:27:30.121 ], 00:27:30.121 "read-only": true 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "name": "cache_device", 00:27:30.121 "type": "bdev", 00:27:30.121 "chunks": [ 00:27:30.121 { 00:27:30.121 "id": 0, 00:27:30.121 "state": "INACTIVE", 00:27:30.121 "utilization": 0.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 1, 00:27:30.121 "state": "CLOSED", 00:27:30.121 "utilization": 1.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 2, 00:27:30.121 "state": "CLOSED", 00:27:30.121 "utilization": 1.0 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 3, 00:27:30.121 "state": "OPEN", 00:27:30.121 "utilization": 0.001953125 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "id": 4, 00:27:30.121 "state": "OPEN", 00:27:30.121 "utilization": 0.0 00:27:30.121 } 00:27:30.121 ], 00:27:30.121 "read-only": true 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "name": "verbose_mode", 00:27:30.121 "value": true, 00:27:30.121 "unit": "", 00:27:30.121 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:30.121 }, 00:27:30.121 { 00:27:30.121 "name": "prep_upgrade_on_shutdown", 00:27:30.121 "value": false, 00:27:30.121 "unit": "", 00:27:30.121 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:30.121 } 00:27:30.121 ] 00:27:30.121 } 00:27:30.121 16:54:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:30.379 [2024-11-20 16:54:15.113591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:30.379 [2024-11-20 16:54:15.113631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.379 [2024-11-20 16:54:15.113641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:30.379 [2024-11-20 16:54:15.113647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.379 [2024-11-20 16:54:15.113664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.379 [2024-11-20 16:54:15.113670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.379 [2024-11-20 16:54:15.113677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:30.380 [2024-11-20 16:54:15.113683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.380 [2024-11-20 16:54:15.113697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.380 [2024-11-20 16:54:15.113703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.380 [2024-11-20 16:54:15.113709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:30.380 [2024-11-20 16:54:15.113715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.380 [2024-11-20 16:54:15.113760] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0 00:27:30.380 true 00:27:30.380 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:30.380 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.380 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:30.637 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:30.637 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:30.637 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:30.895 [2024-11-20 16:54:15.529889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.895 [2024-11-20 16:54:15.529925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:30.895 [2024-11-20 16:54:15.529934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.895 [2024-11-20 16:54:15.529941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.895 [2024-11-20 16:54:15.529958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.895 [2024-11-20 16:54:15.529964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:30.895 [2024-11-20 16:54:15.529970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:30.895 [2024-11-20 16:54:15.529976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.895 [2024-11-20 16:54:15.529991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.895 [2024-11-20 16:54:15.529997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:30.895 [2024-11-20 16:54:15.530003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:30.895 [2024-11-20 16:54:15.530008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:30.895 [2024-11-20 16:54:15.530051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.153 ms, result 0 00:27:30.895 true 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:30.895 { 00:27:30.895 "name": "ftl", 00:27:30.895 "properties": [ 00:27:30.895 { 00:27:30.895 "name": "superblock_version", 00:27:30.895 "value": 5, 00:27:30.895 "read-only": true 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "name": "base_device", 00:27:30.895 "bands": [ 00:27:30.895 { 00:27:30.895 "id": 0, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 1, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 2, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 3, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 4, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 5, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 6, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 7, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 8, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 9, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 10, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 11, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 12, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 13, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 14, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 15, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 16, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 17, 00:27:30.895 "state": "FREE", 00:27:30.895 "validity": 0.0 00:27:30.895 } 00:27:30.895 ], 00:27:30.895 "read-only": true 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "name": "cache_device", 00:27:30.895 "type": "bdev", 00:27:30.895 "chunks": [ 00:27:30.895 { 00:27:30.895 "id": 0, 00:27:30.895 "state": "INACTIVE", 00:27:30.895 "utilization": 0.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 1, 00:27:30.895 "state": "CLOSED", 00:27:30.895 "utilization": 1.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 2, 00:27:30.895 "state": "CLOSED", 00:27:30.895 "utilization": 1.0 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 3, 00:27:30.895 "state": "OPEN", 00:27:30.895 "utilization": 0.001953125 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "id": 4, 00:27:30.895 "state": "OPEN", 00:27:30.895 "utilization": 0.0 00:27:30.895 } 00:27:30.895 ], 00:27:30.895 "read-only": true 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "name": "verbose_mode", 
00:27:30.895 "value": true, 00:27:30.895 "unit": "", 00:27:30.895 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:30.895 }, 00:27:30.895 { 00:27:30.895 "name": "prep_upgrade_on_shutdown", 00:27:30.895 "value": true, 00:27:30.895 "unit": "", 00:27:30.895 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:30.895 } 00:27:30.895 ] 00:27:30.895 } 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80571 ]] 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80571 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80571 ']' 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80571 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.895 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80571 00:27:31.153 killing process with pid 80571 00:27:31.153 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:31.153 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:31.153 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80571' 00:27:31.153 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80571 00:27:31.153 16:54:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80571 00:27:31.722 [2024-11-20 16:54:16.331737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:31.722 [2024-11-20 16:54:16.343683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.722 [2024-11-20 16:54:16.343720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:31.722 [2024-11-20 16:54:16.343730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:31.722 [2024-11-20 16:54:16.343737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.722 [2024-11-20 16:54:16.343754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:31.722 [2024-11-20 16:54:16.345836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.722 [2024-11-20 16:54:16.345860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:31.722 [2024-11-20 16:54:16.345868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.072 ms 00:27:31.722 [2024-11-20 16:54:16.345875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.232176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.232242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:39.959 [2024-11-20 16:54:24.232257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7886.257 ms 00:27:39.959 [2024-11-20 16:54:24.232270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.233563] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.233583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:39.959 [2024-11-20 16:54:24.233592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.276 ms 00:27:39.959 [2024-11-20 16:54:24.233600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.234729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.234750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:39.959 [2024-11-20 16:54:24.234760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:27:39.959 [2024-11-20 16:54:24.234769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.244241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.244280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:39.959 [2024-11-20 16:54:24.244290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.428 ms 00:27:39.959 [2024-11-20 16:54:24.244298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.250914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.250946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:39.959 [2024-11-20 16:54:24.250957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.584 ms 00:27:39.959 [2024-11-20 16:54:24.250965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.251046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.251056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:39.959 [2024-11-20 16:54:24.251069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:39.959 [2024-11-20 16:54:24.251077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.259750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.259777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:39.959 [2024-11-20 16:54:24.259785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.658 ms 00:27:39.959 [2024-11-20 16:54:24.259793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.268703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.268736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:39.959 [2024-11-20 16:54:24.268744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.880 ms 00:27:39.959 [2024-11-20 16:54:24.268752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.277854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.277886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:39.959 [2024-11-20 16:54:24.277895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.070 ms 00:27:39.959 [2024-11-20 16:54:24.277902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.287058] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.287086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:39.959 [2024-11-20 16:54:24.287095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.092 ms 00:27:39.959 [2024-11-20 16:54:24.287102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.287132] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:39.959 [2024-11-20 16:54:24.287147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:39.959 [2024-11-20 16:54:24.287158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:39.959 [2024-11-20 16:54:24.287173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:39.959 [2024-11-20 16:54:24.287182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:39.959 [2024-11-20 16:54:24.287309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:39.959 [2024-11-20 16:54:24.287318] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f6a6a3bc-f32b-4fce-8012-5f5288180d34 00:27:39.959 [2024-11-20 16:54:24.287325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:39.959 [2024-11-20 16:54:24.287332] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:39.959 [2024-11-20 16:54:24.287338] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:39.959 [2024-11-20 16:54:24.287347] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:39.959 [2024-11-20 16:54:24.287354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:39.959 [2024-11-20 16:54:24.287364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:39.959 [2024-11-20 16:54:24.287370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:39.959 [2024-11-20 16:54:24.287387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:39.959 [2024-11-20 16:54:24.287394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:39.959 [2024-11-20 16:54:24.287401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.959 [2024-11-20 16:54:24.287411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:39.959 [2024-11-20 16:54:24.287419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:27:39.959 [2024-11-20 16:54:24.287426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.959 [2024-11-20 16:54:24.299591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-11-20 16:54:24.299621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:39.960 [2024-11-20 16:54:24.299631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.148 ms 00:27:39.960 [2024-11-20 16:54:24.299643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.299980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-11-20 16:54:24.299995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:39.960 [2024-11-20 16:54:24.300004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:27:39.960 [2024-11-20 16:54:24.300011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.341568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.341615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:39.960 [2024-11-20 16:54:24.341631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.341639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.341680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.341688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:39.960 [2024-11-20 16:54:24.341696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.341703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.341792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.341801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:39.960 [2024-11-20 16:54:24.341810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.341817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.341836] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.341844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:39.960 [2024-11-20 16:54:24.341851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.341858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.415644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.415682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:39.960 [2024-11-20 16:54:24.415692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.415703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.465982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.466019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:39.960 [2024-11-20 16:54:24.466029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.466036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.466096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.466104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:39.960 [2024-11-20 16:54:24.466110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.466116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.466168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.466176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:39.960 [2024-11-20 16:54:24.466182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.466187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.466262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.466269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:39.960 [2024-11-20 16:54:24.466275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.466281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.466303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.466313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:39.960 [2024-11-20 16:54:24.466319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.466325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.466353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.466360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:39.960 [2024-11-20 16:54:24.466366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.466373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 
[2024-11-20 16:54:24.466419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:39.960 [2024-11-20 16:54:24.466428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:39.960 [2024-11-20 16:54:24.466434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:39.960 [2024-11-20 16:54:24.466440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-11-20 16:54:24.466529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8122.806 ms, result 0 00:27:46.513 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:46.513 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81092 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81092 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81092 ']' 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.514 16:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.514 [2024-11-20 16:54:30.792088] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:46.514 [2024-11-20 16:54:30.792228] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81092 ] 00:27:46.514 [2024-11-20 16:54:30.949578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.514 [2024-11-20 16:54:31.034585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.771 [2024-11-20 16:54:31.619644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.771 [2024-11-20 16:54:31.619696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.030 [2024-11-20 16:54:31.763235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.763287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:47.030 [2024-11-20 16:54:31.763298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:47.030 [2024-11-20 16:54:31.763305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.763350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.763358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:47.030 [2024-11-20 16:54:31.763365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:47.030 [2024-11-20 16:54:31.763370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.763399] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:47.030 [2024-11-20 16:54:31.763987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:47.030 [2024-11-20 16:54:31.764005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.764011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:47.030 [2024-11-20 16:54:31.764018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.614 ms 00:27:47.030 [2024-11-20 16:54:31.764024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.765086] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:47.030 [2024-11-20 16:54:31.774775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.774806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:47.030 [2024-11-20 16:54:31.774820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.690 ms 00:27:47.030 [2024-11-20 16:54:31.774826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.774878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.774886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:47.030 [2024-11-20 16:54:31.774893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:47.030 [2024-11-20 16:54:31.774898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.779676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 
16:54:31.779705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:47.030 [2024-11-20 16:54:31.779713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.726 ms 00:27:47.030 [2024-11-20 16:54:31.779720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.779766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.779774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:47.030 [2024-11-20 16:54:31.779780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:47.030 [2024-11-20 16:54:31.779786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.779826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.779833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:47.030 [2024-11-20 16:54:31.779842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:47.030 [2024-11-20 16:54:31.779848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.779866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:47.030 [2024-11-20 16:54:31.782535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.782560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:47.030 [2024-11-20 16:54:31.782567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.674 ms 00:27:47.030 [2024-11-20 16:54:31.782576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.782598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.782605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:47.030 [2024-11-20 16:54:31.782612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:47.030 [2024-11-20 16:54:31.782618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.030 [2024-11-20 16:54:31.782636] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:47.030 [2024-11-20 16:54:31.782650] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:47.030 [2024-11-20 16:54:31.782679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:47.030 [2024-11-20 16:54:31.782691] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:47.030 [2024-11-20 16:54:31.782773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:47.030 [2024-11-20 16:54:31.782782] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:47.030 [2024-11-20 16:54:31.782790] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:47.030 [2024-11-20 16:54:31.782798] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:47.030 [2024-11-20 16:54:31.782805] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:47.030 [2024-11-20 16:54:31.782813] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:47.030 [2024-11-20 16:54:31.782819] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:47.030 [2024-11-20 16:54:31.782825] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:47.030 [2024-11-20 16:54:31.782831] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:47.030 [2024-11-20 16:54:31.782837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.030 [2024-11-20 16:54:31.782843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:47.030 [2024-11-20 16:54:31.782849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:27:47.031 [2024-11-20 16:54:31.782855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.031 [2024-11-20 16:54:31.782921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.031 [2024-11-20 16:54:31.782928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:47.031 [2024-11-20 16:54:31.782934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:47.031 [2024-11-20 16:54:31.782941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.031 [2024-11-20 16:54:31.783036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:47.031 [2024-11-20 16:54:31.783084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:47.031 [2024-11-20 16:54:31.783091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:47.031 [2024-11-20 16:54:31.783110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:47.031 [2024-11-20 16:54:31.783121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:47.031 [2024-11-20 16:54:31.783126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:47.031 [2024-11-20 16:54:31.783131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:47.031 [2024-11-20 16:54:31.783142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:47.031 [2024-11-20 16:54:31.783148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:47.031 [2024-11-20 16:54:31.783159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:47.031 [2024-11-20 16:54:31.783164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:47.031 [2024-11-20 16:54:31.783174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:47.031 [2024-11-20 16:54:31.783180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783185] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:47.031 [2024-11-20 16:54:31.783189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:47.031 [2024-11-20 16:54:31.783195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:47.031 [2024-11-20 16:54:31.783205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:47.031 [2024-11-20 16:54:31.783210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:47.031 [2024-11-20 16:54:31.783225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:47.031 [2024-11-20 16:54:31.783230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:47.031 [2024-11-20 16:54:31.783241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:47.031 [2024-11-20 16:54:31.783246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:47.031 [2024-11-20 16:54:31.783256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:47.031 [2024-11-20 16:54:31.783261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:47.031 [2024-11-20 16:54:31.783271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:47.031 [2024-11-20 16:54:31.783286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:47.031 [2024-11-20 16:54:31.783301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:47.031 [2024-11-20 16:54:31.783306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783311] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:47.031 [2024-11-20 16:54:31.783319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:47.031 [2024-11-20 16:54:31.783325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.031 [2024-11-20 16:54:31.783338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:47.031 [2024-11-20 16:54:31.783344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:47.031 [2024-11-20 16:54:31.783349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:47.031 [2024-11-20 16:54:31.783355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:47.031 [2024-11-20 16:54:31.783360] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:47.031 [2024-11-20 16:54:31.783365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:47.031 [2024-11-20 16:54:31.783371] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:47.031 [2024-11-20 16:54:31.783388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:47.031 [2024-11-20 16:54:31.783401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:47.031 [2024-11-20 16:54:31.783418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:47.031 [2024-11-20 16:54:31.783424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:47.031 [2024-11-20 16:54:31.783430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:47.031 [2024-11-20 16:54:31.783436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:47.031 [2024-11-20 16:54:31.783475] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:47.031 [2024-11-20 16:54:31.783482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:47.031 [2024-11-20 16:54:31.783494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:47.031 [2024-11-20 16:54:31.783499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:47.031 [2024-11-20 16:54:31.783505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:47.031 [2024-11-20 16:54:31.783511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.031 [2024-11-20 16:54:31.783517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:47.031 [2024-11-20 16:54:31.783523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:27:47.031 [2024-11-20 16:54:31.783529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.031 [2024-11-20 16:54:31.783563] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:47.031 [2024-11-20 16:54:31.783571] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:49.648 [2024-11-20 16:54:33.906941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.907003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:49.648 [2024-11-20 16:54:33.907018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2123.369 ms 00:27:49.648 [2024-11-20 16:54:33.907027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.932532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.932580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:49.648 [2024-11-20 16:54:33.932593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.289 ms 00:27:49.648 [2024-11-20 16:54:33.932601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.932699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.932715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:49.648 [2024-11-20 16:54:33.932724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:49.648 [2024-11-20 16:54:33.932731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.962967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.963004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:49.648 [2024-11-20 16:54:33.963015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.198 ms 00:27:49.648 [2024-11-20 16:54:33.963026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.963062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.963071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:49.648 [2024-11-20 16:54:33.963079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.648 [2024-11-20 16:54:33.963086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.963472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.963494] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:49.648 [2024-11-20 16:54:33.963503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:27:49.648 [2024-11-20 16:54:33.963510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.963559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.963567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:49.648 [2024-11-20 16:54:33.963575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:49.648 [2024-11-20 16:54:33.963582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.977602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.977633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:49.648 [2024-11-20 16:54:33.977643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.998 ms 00:27:49.648 [2024-11-20 16:54:33.977651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:33.990030] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:49.648 [2024-11-20 16:54:33.990065] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:49.648 [2024-11-20 16:54:33.990077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:33.990086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:49.648 [2024-11-20 16:54:33.990096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.320 ms 00:27:49.648 [2024-11-20 16:54:33.990102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.003558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.003591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:49.648 [2024-11-20 16:54:34.003602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.416 ms 00:27:49.648 [2024-11-20 16:54:34.003610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.014555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.014584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:49.648 [2024-11-20 16:54:34.014594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.908 ms 00:27:49.648 [2024-11-20 16:54:34.014602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.025778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.025809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:49.648 [2024-11-20 16:54:34.025818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.142 ms 00:27:49.648 [2024-11-20 16:54:34.025826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.026449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.026476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:49.648 [2024-11-20 
16:54:34.026485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:27:49.648 [2024-11-20 16:54:34.026492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.093097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.093162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:49.648 [2024-11-20 16:54:34.093176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.583 ms 00:27:49.648 [2024-11-20 16:54:34.093184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.103431] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:49.648 [2024-11-20 16:54:34.104184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.104212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:49.648 [2024-11-20 16:54:34.104223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.942 ms 00:27:49.648 [2024-11-20 16:54:34.104231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.104335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.104358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:49.648 [2024-11-20 16:54:34.104367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:49.648 [2024-11-20 16:54:34.104375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.104453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.104463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:49.648 [2024-11-20 16:54:34.104472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:49.648 [2024-11-20 16:54:34.104479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.104503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.104512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:49.648 [2024-11-20 16:54:34.104520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:49.648 [2024-11-20 16:54:34.104530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.104557] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:49.648 [2024-11-20 16:54:34.104566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.104574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:49.648 [2024-11-20 16:54:34.104581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:49.648 [2024-11-20 16:54:34.104588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.127397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.127434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:49.648 [2024-11-20 16:54:34.127445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.785 ms 00:27:49.648 [2024-11-20 16:54:34.127454] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.127527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.648 [2024-11-20 16:54:34.127536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:49.648 [2024-11-20 16:54:34.127544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:49.648 [2024-11-20 16:54:34.127551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.648 [2024-11-20 16:54:34.128480] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2364.803 ms, result 0 00:27:49.648 [2024-11-20 16:54:34.143749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.648 [2024-11-20 16:54:34.159725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:49.648 [2024-11-20 16:54:34.167846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:49.648 16:54:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.649 16:54:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:49.649 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:49.649 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:49.649 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:49.649 [2024-11-20 16:54:34.355882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.649 [2024-11-20 16:54:34.355929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:49.649 [2024-11-20 16:54:34.355942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:49.649 [2024-11-20 16:54:34.355953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.649 [2024-11-20 16:54:34.355975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.649 [2024-11-20 16:54:34.355984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:49.649 [2024-11-20 16:54:34.355992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.649 [2024-11-20 16:54:34.355999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.649 [2024-11-20 16:54:34.356018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.649 [2024-11-20 16:54:34.356026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:49.649 [2024-11-20 16:54:34.356033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:49.649 [2024-11-20 16:54:34.356040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.649 [2024-11-20 16:54:34.356097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.204 ms, result 0 00:27:49.649 true 00:27:49.649 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:49.906 { 00:27:49.906 "name": "ftl", 00:27:49.906 "properties": [ 00:27:49.906 { 00:27:49.906 "name": "superblock_version", 00:27:49.906 "value": 5, 00:27:49.906 "read-only": true 00:27:49.906 }, 
00:27:49.906 { 00:27:49.906 "name": "base_device", 00:27:49.906 "bands": [ 00:27:49.906 { 00:27:49.906 "id": 0, 00:27:49.906 "state": "CLOSED", 00:27:49.906 "validity": 1.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 1, 00:27:49.906 "state": "CLOSED", 00:27:49.906 "validity": 1.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 2, 00:27:49.906 "state": "CLOSED", 00:27:49.906 "validity": 0.007843137254901933 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 3, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 4, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 5, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 6, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 7, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 8, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 9, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 10, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 11, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 12, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 13, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 14, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.906 { 00:27:49.906 "id": 15, 00:27:49.906 "state": "FREE", 00:27:49.906 "validity": 0.0 00:27:49.906 }, 00:27:49.907 { 00:27:49.907 "id": 16, 00:27:49.907 "state": "FREE", 00:27:49.907 "validity": 0.0 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "id": 17, 00:27:49.907 "state": "FREE", 00:27:49.907 "validity": 0.0 00:27:49.907 } 00:27:49.907 ], 00:27:49.907 "read-only": true 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "name": "cache_device", 00:27:49.907 "type": "bdev", 00:27:49.907 "chunks": [ 00:27:49.907 { 00:27:49.907 "id": 0, 00:27:49.907 "state": "INACTIVE", 00:27:49.907 "utilization": 0.0 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "id": 1, 00:27:49.907 "state": "OPEN", 00:27:49.907 "utilization": 0.0 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "id": 2, 00:27:49.907 "state": "OPEN", 00:27:49.907 "utilization": 0.0 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "id": 3, 00:27:49.907 "state": "FREE", 00:27:49.907 "utilization": 0.0 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "id": 4, 00:27:49.907 "state": "FREE", 00:27:49.907 "utilization": 0.0 00:27:49.907 } 00:27:49.907 ], 00:27:49.907 "read-only": true 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "name": "verbose_mode", 00:27:49.907 "value": true, 00:27:49.907 "unit": "", 00:27:49.907 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:49.907 }, 00:27:49.907 { 00:27:49.907 "name": "prep_upgrade_on_shutdown", 00:27:49.907 "value": false, 00:27:49.907 "unit": "", 00:27:49.907 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:49.907 } 00:27:49.907 ] 00:27:49.907 } 00:27:49.907 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:49.907 16:54:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:49.907 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:49.907 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:49.907 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:49.907 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:49.907 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:49.907 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:50.163 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:50.164 Validate MD5 checksum, iteration 1 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:50.164 16:54:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:50.164 [2024-11-20 16:54:35.000168] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:50.164 [2024-11-20 16:54:35.000262] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81153 ] 00:27:50.421 [2024-11-20 16:54:35.157405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.421 [2024-11-20 16:54:35.259836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.318  [2024-11-20T16:54:37.460Z] Copying: 693/1024 [MB] (693 MBps) [2024-11-20T16:54:42.720Z] Copying: 1024/1024 [MB] (average 680 MBps) 00:27:57.834 00:27:57.834 16:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:57.834 16:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=23ae04f08ef09a9577c7c655206ee0a1 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 23ae04f08ef09a9577c7c655206ee0a1 != \2\3\a\e\0\4\f\0\8\e\f\0\9\a\9\5\7\7\c\7\c\6\5\5\2\0\6\e\e\0\a\1 ]] 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:59.734 Validate MD5 checksum, iteration 2 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:59.734 16:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:59.991 [2024-11-20 16:54:44.658171] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:59.991 [2024-11-20 16:54:44.658290] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81259 ] 00:27:59.991 [2024-11-20 16:54:44.817967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.247 [2024-11-20 16:54:44.922527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.642  [2024-11-20T16:54:47.095Z] Copying: 640/1024 [MB] (640 MBps) [2024-11-20T16:54:48.470Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:28:03.584 00:28:03.584 16:54:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:03.584 16:54:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b2c704c75051911523eb2d69e5e5560d 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b2c704c75051911523eb2d69e5e5560d != \b\2\c\7\0\4\c\7\5\0\5\1\9\1\1\5\2\3\e\b\2\d\6\9\e\5\e\5\5\6\0\d ]] 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81092 ]] 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81092 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81319 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81319 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81319 ']' 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.955 16:54:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:04.955 [2024-11-20 16:54:49.779733] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:28:04.955 [2024-11-20 16:54:49.779846] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81319 ] 00:28:05.212 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81092 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:05.212 [2024-11-20 16:54:49.938274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.212 [2024-11-20 16:54:50.039585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.146 [2024-11-20 16:54:50.737243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:06.146 [2024-11-20 16:54:50.737311] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:06.146 [2024-11-20 16:54:50.881472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.881522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:06.146 [2024-11-20 16:54:50.881535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:06.146 [2024-11-20 16:54:50.881543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.881600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.881610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:06.146 [2024-11-20 16:54:50.881618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:06.146 [2024-11-20 16:54:50.881626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.881651] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:06.146 [2024-11-20 16:54:50.882322] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:06.146 [2024-11-20 16:54:50.882338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.882345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:06.146 [2024-11-20 16:54:50.882353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.695 ms 00:28:06.146 [2024-11-20 16:54:50.882361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.882674] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:06.146 [2024-11-20 16:54:50.898256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.898308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:06.146 [2024-11-20 16:54:50.898322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.581 ms 00:28:06.146 [2024-11-20 16:54:50.898330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.907334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:06.146 [2024-11-20 16:54:50.907366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:06.146 [2024-11-20 16:54:50.907393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:06.146 [2024-11-20 16:54:50.907401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.907728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.907743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:06.146 [2024-11-20 16:54:50.907752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:28:06.146 [2024-11-20 16:54:50.907759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.907806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.907821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:06.146 [2024-11-20 16:54:50.907832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:06.146 [2024-11-20 16:54:50.907844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.907868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.907880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:06.146 [2024-11-20 16:54:50.907891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:06.146 [2024-11-20 16:54:50.907902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.146 [2024-11-20 16:54:50.907926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:06.146 [2024-11-20 16:54:50.911128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.146 [2024-11-20 16:54:50.911152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:06.147 [2024-11-20 16:54:50.911162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.207 ms 00:28:06.147 [2024-11-20 16:54:50.911168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.147 [2024-11-20 16:54:50.911198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.147 [2024-11-20 16:54:50.911207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:06.147 [2024-11-20 16:54:50.911215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:06.147 [2024-11-20 16:54:50.911222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.147 [2024-11-20 16:54:50.911242] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:06.147 [2024-11-20 16:54:50.911259] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:06.147 [2024-11-20 16:54:50.911298] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:06.147 [2024-11-20 16:54:50.911318] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:06.147 [2024-11-20 16:54:50.911432] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:06.147 [2024-11-20 16:54:50.911443] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:06.147 [2024-11-20 16:54:50.911456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:06.147 [2024-11-20 16:54:50.911469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:06.147 [2024-11-20 16:54:50.911482] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:06.147 [2024-11-20 16:54:50.911494] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:06.147 [2024-11-20 16:54:50.911504] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:06.147 [2024-11-20 16:54:50.911514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:06.147 [2024-11-20 16:54:50.911525] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:06.147 [2024-11-20 16:54:50.911536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.147 [2024-11-20 16:54:50.911549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:06.147 [2024-11-20 16:54:50.911557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:28:06.147 [2024-11-20 16:54:50.911564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.147 [2024-11-20 16:54:50.911648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.147 [2024-11-20 16:54:50.911656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:06.147 [2024-11-20 16:54:50.911664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:06.147 [2024-11-20 16:54:50.911670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.147 [2024-11-20 16:54:50.911789] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:06.147 [2024-11-20 16:54:50.911804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:06.147 [2024-11-20 16:54:50.911818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:06.147 [2024-11-20 16:54:50.911829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.911840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:06.147 [2024-11-20 16:54:50.911850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.911858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:06.147 [2024-11-20 16:54:50.911864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:06.147 [2024-11-20 16:54:50.911871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:06.147 [2024-11-20 16:54:50.911878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.911885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:06.147 [2024-11-20 16:54:50.911891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:06.147 [2024-11-20 16:54:50.911901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.911908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:06.147 [2024-11-20 16:54:50.911917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:06.147 [2024-11-20 16:54:50.911924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.911934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:06.147 [2024-11-20 16:54:50.911941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:06.147 [2024-11-20 16:54:50.911947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.911955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:06.147 [2024-11-20 16:54:50.911962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:06.147 [2024-11-20 16:54:50.911968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:06.147 [2024-11-20 16:54:50.911975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:06.147 [2024-11-20 16:54:50.911988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:06.147 [2024-11-20 16:54:50.911994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:06.147 [2024-11-20 16:54:50.912000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:06.147 [2024-11-20 16:54:50.912007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:06.147 [2024-11-20 16:54:50.912013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:06.147 [2024-11-20 16:54:50.912023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:06.147 [2024-11-20 16:54:50.912030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:06.147 [2024-11-20 16:54:50.912039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:06.147 [2024-11-20 16:54:50.912046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:06.147 [2024-11-20 16:54:50.912053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:06.147 [2024-11-20 16:54:50.912059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.912065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:06.147 [2024-11-20 16:54:50.912071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:06.147 [2024-11-20 16:54:50.912078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.912088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:06.147 [2024-11-20 16:54:50.912095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:06.147 [2024-11-20 16:54:50.912101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.912108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:06.147 [2024-11-20 16:54:50.912114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:06.147 [2024-11-20 16:54:50.912121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.912128] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:06.147 [2024-11-20 16:54:50.912135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:06.147 [2024-11-20 16:54:50.912142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:06.147 [2024-11-20 16:54:50.912149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:06.147 [2024-11-20 16:54:50.912156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:06.147 [2024-11-20 16:54:50.912163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:06.147 [2024-11-20 16:54:50.912173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:06.147 [2024-11-20 16:54:50.912179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:06.147 [2024-11-20 16:54:50.912191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:06.147 [2024-11-20 16:54:50.912201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:06.147 [2024-11-20 16:54:50.912209] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:06.147 [2024-11-20 16:54:50.912221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:06.147 [2024-11-20 16:54:50.912240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:06.147 [2024-11-20 16:54:50.912261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:06.147 [2024-11-20 16:54:50.912268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:06.147 [2024-11-20 16:54:50.912275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:06.147 [2024-11-20 16:54:50.912285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:06.147 [2024-11-20 16:54:50.912334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:06.147 [2024-11-20 16:54:50.912342] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:06.147 [2024-11-20 16:54:50.912350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.148 [2024-11-20 16:54:50.912361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:06.148 [2024-11-20 16:54:50.912371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:06.148 [2024-11-20 16:54:50.912390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:06.148 [2024-11-20 16:54:50.912401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:06.148 [2024-11-20 16:54:50.912412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.912425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:06.148 [2024-11-20 16:54:50.912433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.691 ms 00:28:06.148 [2024-11-20 16:54:50.912440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.937033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.937069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:06.148 [2024-11-20 16:54:50.937080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.542 ms 00:28:06.148 [2024-11-20 16:54:50.937087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.937131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.937139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:06.148 [2024-11-20 16:54:50.937147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:06.148 [2024-11-20 16:54:50.937154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.967795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.967833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:06.148 [2024-11-20 16:54:50.967843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.583 ms 00:28:06.148 [2024-11-20 16:54:50.967851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.967882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.967890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:06.148 [2024-11-20 16:54:50.967898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:06.148 [2024-11-20 16:54:50.967905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.968007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.968024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:06.148 [2024-11-20 16:54:50.968032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:06.148 [2024-11-20 16:54:50.968040] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.968076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.968101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:06.148 [2024-11-20 16:54:50.968109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:06.148 [2024-11-20 16:54:50.968116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.982320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.982353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:06.148 [2024-11-20 16:54:50.982364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.184 ms 00:28:06.148 [2024-11-20 16:54:50.982372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:50.982519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:50.982530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:06.148 [2024-11-20 16:54:50.982539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:06.148 [2024-11-20 16:54:50.982546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:51.009358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:51.009427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:06.148 [2024-11-20 16:54:51.009444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.792 ms 00:28:06.148 [2024-11-20 16:54:51.009455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.148 [2024-11-20 16:54:51.019045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.148 [2024-11-20 16:54:51.019076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:06.148 [2024-11-20 16:54:51.019092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.510 ms 00:28:06.148 [2024-11-20 16:54:51.019099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.406 [2024-11-20 16:54:51.073142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.406 [2024-11-20 16:54:51.073197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:06.406 [2024-11-20 16:54:51.073213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.985 ms 00:28:06.406 [2024-11-20 16:54:51.073222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.406 [2024-11-20 16:54:51.073361] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:06.406 [2024-11-20 16:54:51.073476] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:06.406 [2024-11-20 16:54:51.073565] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:06.406 [2024-11-20 16:54:51.073654] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:06.406 [2024-11-20 16:54:51.073663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.406 [2024-11-20 16:54:51.073670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:06.406 [2024-11-20 
16:54:51.073679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:28:06.406 [2024-11-20 16:54:51.073686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.406 [2024-11-20 16:54:51.073747] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:06.406 [2024-11-20 16:54:51.073758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.406 [2024-11-20 16:54:51.073769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:06.406 [2024-11-20 16:54:51.073776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:06.406 [2024-11-20 16:54:51.073783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.406 [2024-11-20 16:54:51.087753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.406 [2024-11-20 16:54:51.087792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:06.406 [2024-11-20 16:54:51.087803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.948 ms 00:28:06.406 [2024-11-20 16:54:51.087810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.406 [2024-11-20 16:54:51.096398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.406 [2024-11-20 16:54:51.096430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:06.406 [2024-11-20 16:54:51.096441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:06.406 [2024-11-20 16:54:51.096448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.406 [2024-11-20 16:54:51.096548] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:06.406 [2024-11-20 16:54:51.096680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.406 [2024-11-20 16:54:51.096701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:06.406 [2024-11-20 16:54:51.096710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:28:06.406 [2024-11-20 16:54:51.096717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.664 [2024-11-20 16:54:51.514466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.664 [2024-11-20 16:54:51.514524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:06.664 [2024-11-20 16:54:51.514537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 416.886 ms 00:28:06.664 [2024-11-20 16:54:51.514544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.664 [2024-11-20 16:54:51.517925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.664 [2024-11-20 16:54:51.517969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:06.664 [2024-11-20 16:54:51.517978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:28:06.664 [2024-11-20 16:54:51.517985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.664 [2024-11-20 16:54:51.518298] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:06.664 [2024-11-20 16:54:51.518327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.664 [2024-11-20 16:54:51.518334] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:06.664 [2024-11-20 16:54:51.518341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:28:06.664 [2024-11-20 16:54:51.518348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.664 [2024-11-20 16:54:51.518393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.664 [2024-11-20 16:54:51.518402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:06.664 [2024-11-20 16:54:51.518409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:06.664 [2024-11-20 16:54:51.518415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.664 [2024-11-20 16:54:51.518446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 421.900 ms, result 0 00:28:06.664 [2024-11-20 16:54:51.518476] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:06.664 [2024-11-20 16:54:51.518566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.664 [2024-11-20 16:54:51.518575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:06.664 [2024-11-20 16:54:51.518582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:28:06.664 [2024-11-20 16:54:51.518587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.230 [2024-11-20 16:54:51.941983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.230 [2024-11-20 16:54:51.942047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:07.230 [2024-11-20 16:54:51.942061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 422.653 ms 00:28:07.230 [2024-11-20 16:54:51.942070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.230 [2024-11-20 16:54:51.945973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.230 [2024-11-20 16:54:51.946007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:07.230 [2024-11-20 16:54:51.946018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.940 ms 00:28:07.230 [2024-11-20 16:54:51.946025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.230 [2024-11-20 16:54:51.946357] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:07.230 [2024-11-20 16:54:51.946470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.230 [2024-11-20 16:54:51.946478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:07.230 [2024-11-20 16:54:51.946486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:28:07.230 [2024-11-20 16:54:51.946493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.230 [2024-11-20 16:54:51.946523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.230 [2024-11-20 16:54:51.946532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:07.230 [2024-11-20 16:54:51.946540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:07.230 [2024-11-20 16:54:51.946547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.230 [2024-11-20 
16:54:51.946582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 428.097 ms, result 0 00:28:07.230 [2024-11-20 16:54:51.946621] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:07.231 [2024-11-20 16:54:51.946631] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:07.231 [2024-11-20 16:54:51.946640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.946647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:07.231 [2024-11-20 16:54:51.946655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 850.109 ms 00:28:07.231 [2024-11-20 16:54:51.946663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.946691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.946699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:07.231 [2024-11-20 16:54:51.946709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:07.231 [2024-11-20 16:54:51.946716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.957420] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:07.231 [2024-11-20 16:54:51.957533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.957544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:07.231 [2024-11-20 16:54:51.957553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.802 ms 00:28:07.231 [2024-11-20 16:54:51.957561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.958233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.958258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:07.231 [2024-11-20 16:54:51.958269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.607 ms 00:28:07.231 [2024-11-20 16:54:51.958276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.960540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.960562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:07.231 [2024-11-20 16:54:51.960572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.248 ms 00:28:07.231 [2024-11-20 16:54:51.960580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.960619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.960628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:07.231 [2024-11-20 16:54:51.960636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.231 [2024-11-20 16:54:51.960646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.960746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.960761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:07.231 
[2024-11-20 16:54:51.960770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:07.231 [2024-11-20 16:54:51.960777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.960797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.960805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:07.231 [2024-11-20 16:54:51.960813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:07.231 [2024-11-20 16:54:51.960820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.960845] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:07.231 [2024-11-20 16:54:51.960855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.960863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:07.231 [2024-11-20 16:54:51.960875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:07.231 [2024-11-20 16:54:51.960881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.960931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.231 [2024-11-20 16:54:51.960940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:07.231 [2024-11-20 16:54:51.960948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:07.231 [2024-11-20 16:54:51.960955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.231 [2024-11-20 16:54:51.961896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1080.029 ms, result 0 00:28:07.231 [2024-11-20 16:54:51.974200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.231 [2024-11-20 16:54:51.990195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:07.231 [2024-11-20 16:54:51.998320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:07.489 Validate MD5 checksum, iteration 1 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:07.489 16:54:52 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:07.489 16:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:07.489 [2024-11-20 16:54:52.347989] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:28:07.489 [2024-11-20 16:54:52.348149] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81350 ] 00:28:07.748 [2024-11-20 16:54:52.513661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.748 [2024-11-20 16:54:52.612929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.664  [2024-11-20T16:54:54.843Z] Copying: 688/1024 [MB] (688 MBps) [2024-11-20T16:54:55.777Z] Copying: 1024/1024 [MB] (average 686 MBps) 00:28:10.891 00:28:10.891 16:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:10.891 16:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=23ae04f08ef09a9577c7c655206ee0a1 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 23ae04f08ef09a9577c7c655206ee0a1 != \2\3\a\e\0\4\f\0\8\e\f\0\9\a\9\5\7\7\c\7\c\6\5\5\2\0\6\e\e\0\a\1 ]] 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:12.791 Validate MD5 checksum, iteration 2 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:12.791 16:54:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:12.791 [2024-11-20 16:54:57.446649] Starting SPDK v25.01-pre git sha1 
ede20dc4e / DPDK 24.03.0 initialization... 00:28:12.791 [2024-11-20 16:54:57.446743] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81407 ] 00:28:12.791 [2024-11-20 16:54:57.601607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.049 [2024-11-20 16:54:57.705176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.421  [2024-11-20T16:54:59.873Z] Copying: 638/1024 [MB] (638 MBps) [2024-11-20T16:55:05.199Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:28:20.313 00:28:20.313 16:55:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:20.313 16:55:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b2c704c75051911523eb2d69e5e5560d 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b2c704c75051911523eb2d69e5e5560d != \b\2\c\7\0\4\c\7\5\0\5\1\9\1\1\5\2\3\e\b\2\d\6\9\e\5\e\5\5\6\0\d ]] 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:21.686 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81319 ]] 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81319 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81319 ']' 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81319 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81319 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.944 killing process with pid 81319 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81319' 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 81319 00:28:21.944 16:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81319 00:28:22.511 [2024-11-20 16:55:07.238520] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:22.511 [2024-11-20 16:55:07.248660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.511 [2024-11-20 16:55:07.248697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:22.511 [2024-11-20 16:55:07.248707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:22.511 [2024-11-20 16:55:07.248714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.511 [2024-11-20 16:55:07.248731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:22.511 [2024-11-20 16:55:07.250842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.511 [2024-11-20 16:55:07.250868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:22.511 [2024-11-20 16:55:07.250877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.100 ms 00:28:22.511 [2024-11-20 16:55:07.250887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.511 [2024-11-20 16:55:07.251070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.511 [2024-11-20 16:55:07.251084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:22.511 [2024-11-20 16:55:07.251091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:28:22.511 [2024-11-20 16:55:07.251097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.511 [2024-11-20 16:55:07.252028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.511 [2024-11-20 16:55:07.252050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:22.511 [2024-11-20 16:55:07.252059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.920 ms 00:28:22.511 [2024-11-20 16:55:07.252065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.511 [2024-11-20 16:55:07.252995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.511 [2024-11-20 16:55:07.253014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:22.512 [2024-11-20 16:55:07.253021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.904 ms 00:28:22.512 [2024-11-20 16:55:07.253027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.260600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.260630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:22.512 [2024-11-20 16:55:07.260639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.548 ms 00:28:22.512 [2024-11-20 16:55:07.260650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.264840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.264867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:22.512 [2024-11-20 16:55:07.264875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.161 ms 00:28:22.512 [2024-11-20 16:55:07.264882] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.264948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.264957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:22.512 [2024-11-20 16:55:07.264964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:22.512 [2024-11-20 16:55:07.264971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.272094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.272123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:22.512 [2024-11-20 16:55:07.272130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.108 ms 00:28:22.512 [2024-11-20 16:55:07.272136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.278957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.278984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:22.512 [2024-11-20 16:55:07.278991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.795 ms 00:28:22.512 [2024-11-20 16:55:07.278997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.285692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.285717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:22.512 [2024-11-20 16:55:07.285725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.669 ms 00:28:22.512 [2024-11-20 16:55:07.285731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.292843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.292869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:22.512 [2024-11-20 16:55:07.292876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.066 ms 00:28:22.512 [2024-11-20 16:55:07.292881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.292906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:22.512 [2024-11-20 16:55:07.292918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:22.512 [2024-11-20 16:55:07.292927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:22.512 [2024-11-20 16:55:07.292933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:22.512 [2024-11-20 16:55:07.292939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 
[2024-11-20 16:55:07.292969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.292999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.293005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.293010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.293016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.293022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:22.512 [2024-11-20 16:55:07.293030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:22.512 [2024-11-20 16:55:07.293035] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f6a6a3bc-f32b-4fce-8012-5f5288180d34 00:28:22.512 [2024-11-20 16:55:07.293041] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:22.512 [2024-11-20 16:55:07.293047] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:22.512 [2024-11-20 16:55:07.293053] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:22.512 [2024-11-20 16:55:07.293060] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:22.512 [2024-11-20 16:55:07.293065] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:22.512 [2024-11-20 16:55:07.293074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:22.512 [2024-11-20 16:55:07.293082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:22.512 [2024-11-20 16:55:07.293089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:22.512 [2024-11-20 16:55:07.293096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:22.512 [2024-11-20 16:55:07.293103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.293118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:22.512 [2024-11-20 16:55:07.293127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:28:22.512 [2024-11-20 16:55:07.293136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.302977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.303006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:22.512 [2024-11-20 16:55:07.303015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.823 ms 00:28:22.512 [2024-11-20 16:55:07.303021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:28:22.512 [2024-11-20 16:55:07.303300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.512 [2024-11-20 16:55:07.303313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:22.512 [2024-11-20 16:55:07.303319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:28:22.512 [2024-11-20 16:55:07.303325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.336865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.512 [2024-11-20 16:55:07.336914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:22.512 [2024-11-20 16:55:07.336925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.512 [2024-11-20 16:55:07.336932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.336970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.512 [2024-11-20 16:55:07.336978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:22.512 [2024-11-20 16:55:07.336984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.512 [2024-11-20 16:55:07.336990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.337052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.512 [2024-11-20 16:55:07.337060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:22.512 [2024-11-20 16:55:07.337067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.512 [2024-11-20 16:55:07.337073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.512 [2024-11-20 16:55:07.337086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.512 [2024-11-20 16:55:07.337096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:22.512 [2024-11-20 16:55:07.337101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.512 [2024-11-20 16:55:07.337107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.395938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.395976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:22.805 [2024-11-20 16:55:07.395986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.395991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.443753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.443795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:22.805 [2024-11-20 16:55:07.443803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.443810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.443879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.443887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:22.805 [2024-11-20 16:55:07.443893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.443899] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.443930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.443937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:22.805 [2024-11-20 16:55:07.443945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.443956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.444024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.444032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:22.805 [2024-11-20 16:55:07.444038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.444043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.444066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.444073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:22.805 [2024-11-20 16:55:07.444079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.444086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.444113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.444120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:22.805 [2024-11-20 16:55:07.444126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.444132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.444164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:22.805 [2024-11-20 16:55:07.444171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:22.805 [2024-11-20 16:55:07.444179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:22.805 [2024-11-20 16:55:07.444185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-20 16:55:07.444274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 195.592 ms, result 0 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:23.370 Remove shared memory files 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:23.370 16:55:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81092 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:23.370 ************************************ 00:28:23.370 END TEST ftl_upgrade_shutdown 00:28:23.370 ************************************ 00:28:23.370 00:28:23.370 real 1m24.273s 00:28:23.370 user 1m53.991s 00:28:23.370 sys 0m18.566s 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.370 16:55:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:23.370 16:55:08 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:23.370 16:55:08 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:23.370 16:55:08 ftl -- ftl/ftl.sh@14 -- # killprocess 74969 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@954 -- # '[' -z 74969 ']' 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@958 -- # kill -0 74969 00:28:23.370 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74969) - No such process 00:28:23.370 Process with pid 74969 is not found 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74969 is not found' 00:28:23.370 16:55:08 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:23.370 16:55:08 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81554 00:28:23.370 16:55:08 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81554 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@835 -- # '[' -z 81554 ']' 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.370 16:55:08 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.371 16:55:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:23.371 16:55:08 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:23.371 [2024-11-20 16:55:08.202628] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:28:23.371 [2024-11-20 16:55:08.202745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81554 ] 00:28:23.629 [2024-11-20 16:55:08.359131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.629 [2024-11-20 16:55:08.457167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.194 16:55:09 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.194 16:55:09 ftl -- common/autotest_common.sh@868 -- # return 0 00:28:24.194 16:55:09 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:24.470 nvme0n1 00:28:24.470 16:55:09 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:24.470 16:55:09 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:24.470 16:55:09 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:24.728 16:55:09 ftl -- ftl/common.sh@28 -- # stores=ad3a925a-cb74-4730-bceb-bb394639f4cd 00:28:24.728 16:55:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:24.728 16:55:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad3a925a-cb74-4730-bceb-bb394639f4cd 00:28:24.984 16:55:09 ftl -- ftl/ftl.sh@23 -- # killprocess 81554 00:28:24.984 16:55:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 81554 ']' 00:28:24.984 16:55:09 ftl -- common/autotest_common.sh@958 -- # kill -0 81554 00:28:24.984 16:55:09 ftl -- common/autotest_common.sh@959 -- # uname 00:28:24.984 16:55:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.984 16:55:09 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81554 00:28:24.984 16:55:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:24.984 16:55:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:24.984 killing process with pid 81554 00:28:24.985 16:55:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81554' 00:28:24.985 16:55:09 ftl -- common/autotest_common.sh@973 -- # kill 81554 00:28:24.985 16:55:09 ftl -- common/autotest_common.sh@978 -- # wait 81554 00:28:26.358 16:55:11 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:26.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:26.616 Waiting for block devices as requested 00:28:26.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:26.616 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:26.616 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:26.873 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:32.206 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:32.206 16:55:16 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:32.206 Remove shared memory files 00:28:32.206 16:55:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:32.206 16:55:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:32.206 16:55:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:32.206 16:55:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:32.206 16:55:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:32.206 16:55:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:32.206 00:28:32.206 real 
9m31.082s 00:28:32.206 user 11m28.312s 00:28:32.206 sys 1m25.381s 00:28:32.206 16:55:16 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.206 16:55:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:32.206 ************************************ 00:28:32.206 END TEST ftl 00:28:32.206 ************************************ 00:28:32.206 16:55:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:32.206 16:55:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:32.206 16:55:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:32.206 16:55:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:32.206 16:55:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:32.206 16:55:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:32.206 16:55:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:32.206 16:55:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:28:32.206 16:55:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:28:32.206 16:55:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:28:32.206 16:55:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.206 16:55:16 -- common/autotest_common.sh@10 -- # set +x 00:28:32.206 16:55:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:28:32.206 16:55:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:28:32.206 16:55:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:28:32.206 16:55:16 -- common/autotest_common.sh@10 -- # set +x 00:28:33.142 INFO: APP EXITING 00:28:33.142 INFO: killing all VMs 00:28:33.142 INFO: killing vhost app 00:28:33.142 INFO: EXIT DONE 00:28:33.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:33.660 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:33.660 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:33.660 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:33.660 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:33.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:34.175 Cleaning 00:28:34.175 Removing: /var/run/dpdk/spdk0/config 00:28:34.175 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:34.175 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:34.175 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:34.175 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:34.175 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:34.175 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:34.175 Removing: /var/run/dpdk/spdk0 00:28:34.175 Removing: /var/run/dpdk/spdk_pid56924 00:28:34.175 Removing: /var/run/dpdk/spdk_pid57126 00:28:34.175 Removing: /var/run/dpdk/spdk_pid57338 00:28:34.175 Removing: /var/run/dpdk/spdk_pid57431 00:28:34.436 Removing: /var/run/dpdk/spdk_pid57465 00:28:34.436 Removing: /var/run/dpdk/spdk_pid57588 00:28:34.436 Removing: /var/run/dpdk/spdk_pid57606 00:28:34.436 Removing: /var/run/dpdk/spdk_pid57794 00:28:34.436 Removing: /var/run/dpdk/spdk_pid57887 00:28:34.436 Removing: /var/run/dpdk/spdk_pid57983 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58094 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58185 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58225 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58256 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58332 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58416 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58841 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58905 00:28:34.436 
Removing: /var/run/dpdk/spdk_pid58957 00:28:34.436 Removing: /var/run/dpdk/spdk_pid58974 00:28:34.436 Removing: /var/run/dpdk/spdk_pid59076 00:28:34.436 Removing: /var/run/dpdk/spdk_pid59092 00:28:34.436 Removing: /var/run/dpdk/spdk_pid59183 00:28:34.436 Removing: /var/run/dpdk/spdk_pid59199 00:28:34.436 Removing: /var/run/dpdk/spdk_pid59258 00:28:34.436 Removing: /var/run/dpdk/spdk_pid59270 00:28:34.436 Removing: /var/run/dpdk/spdk_pid59323 00:28:34.437 Removing: /var/run/dpdk/spdk_pid59341 00:28:34.437 Removing: /var/run/dpdk/spdk_pid59496 00:28:34.437 Removing: /var/run/dpdk/spdk_pid59532 00:28:34.437 Removing: /var/run/dpdk/spdk_pid59616 00:28:34.437 Removing: /var/run/dpdk/spdk_pid59788 00:28:34.437 Removing: /var/run/dpdk/spdk_pid59872 00:28:34.437 Removing: /var/run/dpdk/spdk_pid59908 00:28:34.437 Removing: /var/run/dpdk/spdk_pid60336 00:28:34.437 Removing: /var/run/dpdk/spdk_pid60434 00:28:34.437 Removing: /var/run/dpdk/spdk_pid60543 00:28:34.437 Removing: /var/run/dpdk/spdk_pid60596 00:28:34.437 Removing: /var/run/dpdk/spdk_pid60616 00:28:34.437 Removing: /var/run/dpdk/spdk_pid60702 00:28:34.437 Removing: /var/run/dpdk/spdk_pid61321 00:28:34.437 Removing: /var/run/dpdk/spdk_pid61363 00:28:34.437 Removing: /var/run/dpdk/spdk_pid61832 00:28:34.437 Removing: /var/run/dpdk/spdk_pid61925 00:28:34.437 Removing: /var/run/dpdk/spdk_pid62045 00:28:34.437 Removing: /var/run/dpdk/spdk_pid62099 00:28:34.437 Removing: /var/run/dpdk/spdk_pid62130 00:28:34.437 Removing: /var/run/dpdk/spdk_pid62150 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64018 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64155 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64159 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64171 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64222 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64226 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64238 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64277 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64281 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64293 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64338 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64342 00:28:34.437 Removing: /var/run/dpdk/spdk_pid64354 00:28:34.437 Removing: /var/run/dpdk/spdk_pid65741 00:28:34.437 Removing: /var/run/dpdk/spdk_pid65845 00:28:34.437 Removing: /var/run/dpdk/spdk_pid67247 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69003 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69072 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69148 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69252 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69344 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69438 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69508 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69583 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69694 00:28:34.437 Removing: /var/run/dpdk/spdk_pid69786 00:28:34.438 Removing: /var/run/dpdk/spdk_pid69881 00:28:34.438 Removing: /var/run/dpdk/spdk_pid69950 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70025 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70129 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70221 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70317 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70380 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70461 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70565 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70657 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70747 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70821 00:28:34.438 Removing: /var/run/dpdk/spdk_pid70895 00:28:34.438 Removing: 
/var/run/dpdk/spdk_pid70968 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71044 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71147 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71242 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71331 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71405 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71479 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71548 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71629 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71732 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71823 00:28:34.438 Removing: /var/run/dpdk/spdk_pid71971 00:28:34.438 Removing: /var/run/dpdk/spdk_pid72245 00:28:34.438 Removing: /var/run/dpdk/spdk_pid72282 00:28:34.438 Removing: /var/run/dpdk/spdk_pid72725 00:28:34.438 Removing: /var/run/dpdk/spdk_pid72907 00:28:34.438 Removing: /var/run/dpdk/spdk_pid73007 00:28:34.438 Removing: /var/run/dpdk/spdk_pid73111 00:28:34.438 Removing: /var/run/dpdk/spdk_pid73159 00:28:34.438 Removing: /var/run/dpdk/spdk_pid73184 00:28:34.438 Removing: /var/run/dpdk/spdk_pid73499 00:28:34.438 Removing: /var/run/dpdk/spdk_pid73555 00:28:34.438 Removing: /var/run/dpdk/spdk_pid73634 00:28:34.438 Removing: /var/run/dpdk/spdk_pid74024 00:28:34.438 Removing: /var/run/dpdk/spdk_pid74169 00:28:34.438 Removing: /var/run/dpdk/spdk_pid74969 00:28:34.438 Removing: /var/run/dpdk/spdk_pid75101 00:28:34.438 Removing: /var/run/dpdk/spdk_pid75287 00:28:34.438 Removing: /var/run/dpdk/spdk_pid75374 00:28:34.438 Removing: /var/run/dpdk/spdk_pid75671 00:28:34.438 Removing: /var/run/dpdk/spdk_pid75909 00:28:34.438 Removing: /var/run/dpdk/spdk_pid76250 00:28:34.438 Removing: /var/run/dpdk/spdk_pid76432 00:28:34.438 Removing: /var/run/dpdk/spdk_pid76523 00:28:34.439 Removing: /var/run/dpdk/spdk_pid76576 00:28:34.699 Removing: /var/run/dpdk/spdk_pid76665 00:28:34.699 Removing: /var/run/dpdk/spdk_pid76690 00:28:34.699 Removing: /var/run/dpdk/spdk_pid76748 00:28:34.699 Removing: /var/run/dpdk/spdk_pid76912 00:28:34.699 Removing: /var/run/dpdk/spdk_pid77133 00:28:34.699 Removing: /var/run/dpdk/spdk_pid77400 00:28:34.699 Removing: /var/run/dpdk/spdk_pid77898 00:28:34.699 Removing: /var/run/dpdk/spdk_pid78173 00:28:34.699 Removing: /var/run/dpdk/spdk_pid78522 00:28:34.699 Removing: /var/run/dpdk/spdk_pid78653 00:28:34.699 Removing: /var/run/dpdk/spdk_pid78741 00:28:34.699 Removing: /var/run/dpdk/spdk_pid79115 00:28:34.699 Removing: /var/run/dpdk/spdk_pid79179 00:28:34.699 Removing: /var/run/dpdk/spdk_pid79509 00:28:34.699 Removing: /var/run/dpdk/spdk_pid79779 00:28:34.699 Removing: /var/run/dpdk/spdk_pid80571 00:28:34.699 Removing: /var/run/dpdk/spdk_pid80682 00:28:34.699 Removing: /var/run/dpdk/spdk_pid80720 00:28:34.699 Removing: /var/run/dpdk/spdk_pid80785 00:28:34.699 Removing: /var/run/dpdk/spdk_pid80844 00:28:34.699 Removing: /var/run/dpdk/spdk_pid80903 00:28:34.699 Removing: /var/run/dpdk/spdk_pid81092 00:28:34.699 Removing: /var/run/dpdk/spdk_pid81153 00:28:34.699 Removing: /var/run/dpdk/spdk_pid81259 00:28:34.699 Removing: /var/run/dpdk/spdk_pid81319 00:28:34.699 Removing: /var/run/dpdk/spdk_pid81350 00:28:34.699 Removing: /var/run/dpdk/spdk_pid81407 00:28:34.699 Removing: /var/run/dpdk/spdk_pid81554 00:28:34.699 Clean 00:28:34.699 16:55:19 -- common/autotest_common.sh@1453 -- # return 0 00:28:34.699 16:55:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:28:34.699 16:55:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.699 16:55:19 -- common/autotest_common.sh@10 -- # set +x 00:28:34.699 16:55:19 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:28:34.699 16:55:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.699 16:55:19 -- common/autotest_common.sh@10 -- # set +x 00:28:34.699 16:55:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:34.699 16:55:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:34.699 16:55:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:34.699 16:55:19 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:28:34.699 16:55:19 -- spdk/autotest.sh@398 -- # hostname 00:28:34.699 16:55:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:34.957 geninfo: WARNING: invalid characters removed from testname! 00:29:01.508 16:55:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:01.508 16:55:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:03.527 16:55:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:05.439 16:55:50 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:07.337 16:55:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:09.233 16:55:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:11.133 16:55:55 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:11.133 16:55:55 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:11.133 16:55:55 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:11.133 16:55:55 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:11.133 16:55:55 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:11.133 16:55:55 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:11.133 + [[ -n 5019 ]] 00:29:11.133 + sudo kill 5019 00:29:11.141 [Pipeline] } 00:29:11.157 [Pipeline] // timeout 00:29:11.163 [Pipeline] } 00:29:11.177 [Pipeline] // stage 00:29:11.183 [Pipeline] } 00:29:11.197 [Pipeline] // catchError 00:29:11.207 [Pipeline] stage 00:29:11.209 [Pipeline] { (Stop VM) 00:29:11.224 [Pipeline] sh 00:29:11.501 + vagrant halt 00:29:14.053 ==> default: Halting domain... 00:29:17.347 [Pipeline] sh 00:29:17.635 + vagrant destroy -f 00:29:20.182 ==> default: Removing domain... 00:29:20.762 [Pipeline] sh 00:29:21.041 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:21.049 [Pipeline] } 00:29:21.063 [Pipeline] // stage 00:29:21.068 [Pipeline] } 00:29:21.081 [Pipeline] // dir 00:29:21.086 [Pipeline] } 00:29:21.100 [Pipeline] // wrap 00:29:21.106 [Pipeline] } 00:29:21.118 [Pipeline] // catchError 00:29:21.127 [Pipeline] stage 00:29:21.129 [Pipeline] { (Epilogue) 00:29:21.141 [Pipeline] sh 00:29:21.419 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:26.730 [Pipeline] catchError 00:29:26.733 [Pipeline] { 00:29:26.748 [Pipeline] sh 00:29:27.029 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:27.029 Artifacts sizes are good 00:29:27.037 [Pipeline] } 00:29:27.053 [Pipeline] // catchError 00:29:27.067 [Pipeline] archiveArtifacts 00:29:27.076 Archiving artifacts 00:29:27.190 [Pipeline] cleanWs 00:29:27.201 [WS-CLEANUP] Deleting project workspace... 00:29:27.201 [WS-CLEANUP] Deferred wipeout is used... 00:29:27.221 [WS-CLEANUP] done 00:29:27.223 [Pipeline] } 00:29:27.239 [Pipeline] // stage 00:29:27.244 [Pipeline] } 00:29:27.260 [Pipeline] // node 00:29:27.265 [Pipeline] End of Pipeline 00:29:27.308 Finished: SUCCESS